Test Report: Docker_Linux_docker_arm64 17822

1b14f6e8a127ccddfb64acb15c203e20bb49b800:2023-12-18:32341

Test failures (3/330)

| Order | Failed test                                          | Duration (s) |
|-------|------------------------------------------------------|--------------|
| 19    | TestDownloadOnly/v1.29.0-rc.2/cached-images          | 0            |
| 35    | TestAddons/parallel/Ingress                          | 38.66        |
| 174   | TestIngressAddonLegacy/serial/ValidateIngressAddons  | 56.63        |
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:132: expected image file exist at "/home/jenkins/minikube-integration/17822-2192/.minikube/cache/images/linux/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" but got error: stat /home/jenkins/minikube-integration/17822-2192/.minikube/cache/images/linux/registry.k8s.io/kube-apiserver_v1.29.0-rc.2: no such file or directory
aaa_download_only_test.go:132: expected image file exist at "/home/jenkins/minikube-integration/17822-2192/.minikube/cache/images/linux/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" but got error: stat /home/jenkins/minikube-integration/17822-2192/.minikube/cache/images/linux/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2: no such file or directory
aaa_download_only_test.go:132: expected image file exist at "/home/jenkins/minikube-integration/17822-2192/.minikube/cache/images/linux/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" but got error: stat /home/jenkins/minikube-integration/17822-2192/.minikube/cache/images/linux/registry.k8s.io/kube-scheduler_v1.29.0-rc.2: no such file or directory
aaa_download_only_test.go:132: expected image file exist at "/home/jenkins/minikube-integration/17822-2192/.minikube/cache/images/linux/registry.k8s.io/kube-proxy_v1.29.0-rc.2" but got error: stat /home/jenkins/minikube-integration/17822-2192/.minikube/cache/images/linux/registry.k8s.io/kube-proxy_v1.29.0-rc.2: no such file or directory
aaa_download_only_test.go:132: expected image file exist at "/home/jenkins/minikube-integration/17822-2192/.minikube/cache/images/linux/registry.k8s.io/pause_3.9" but got error: stat /home/jenkins/minikube-integration/17822-2192/.minikube/cache/images/linux/registry.k8s.io/pause_3.9: no such file or directory
aaa_download_only_test.go:132: expected image file exist at "/home/jenkins/minikube-integration/17822-2192/.minikube/cache/images/linux/registry.k8s.io/etcd_3.5.10-0" but got error: stat /home/jenkins/minikube-integration/17822-2192/.minikube/cache/images/linux/registry.k8s.io/etcd_3.5.10-0: no such file or directory
aaa_download_only_test.go:132: expected image file exist at "/home/jenkins/minikube-integration/17822-2192/.minikube/cache/images/linux/registry.k8s.io/coredns/coredns_v1.11.1" but got error: stat /home/jenkins/minikube-integration/17822-2192/.minikube/cache/images/linux/registry.k8s.io/coredns/coredns_v1.11.1: no such file or directory
aaa_download_only_test.go:132: expected image file exist at "/home/jenkins/minikube-integration/17822-2192/.minikube/cache/images/linux/gcr.io/k8s-minikube/storage-provisioner_v5" but got error: stat /home/jenkins/minikube-integration/17822-2192/.minikube/cache/images/linux/gcr.io/k8s-minikube/storage-provisioner_v5: no such file or directory
--- FAIL: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)
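The failing assertion above is a plain file-existence check: aaa_download_only_test.go stats each expected image file under the profile's image cache and fails when the stat returns "no such file or directory". A minimal Go sketch of that kind of check, with paths copied from the failure messages (checkCachedImages and the trimmed image list below are illustrative, not the test's actual code):

	// A sketch of the stat-based cache check suggested by the failure
	// messages above; the helper name is hypothetical, the path layout
	// is taken from the log.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func checkCachedImages(cacheDir string, images []string) []string {
		var missing []string
		for _, img := range images {
			p := filepath.Join(cacheDir, img)
			if _, err := os.Stat(p); err != nil {
				// Mirrors the test's failure mode: stat returns ENOENT.
				missing = append(missing, fmt.Sprintf("%s: %v", p, err))
			}
		}
		return missing
	}

	func main() {
		cacheDir := "/home/jenkins/minikube-integration/17822-2192/.minikube/cache/images/linux"
		images := []string{
			"registry.k8s.io/kube-apiserver_v1.29.0-rc.2",
			"registry.k8s.io/kube-proxy_v1.29.0-rc.2",
			"registry.k8s.io/pause_3.9",
			"gcr.io/k8s-minikube/storage-provisioner_v5",
		}
		for _, m := range checkCachedImages(cacheDir, images) {
			fmt.Println("missing cached image:", m)
		}
	}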

TestAddons/parallel/Ingress (38.66s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-277112 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-277112 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-277112 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4b8fba9b-be04-448f-ba58-b9f75375fb75] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4b8fba9b-be04-448f-ba58-b9f75375fb75] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.015106078s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p addons-277112 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-277112 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p addons-277112 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.053021369s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p addons-277112 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p addons-277112 addons disable ingress-dns --alsologtostderr -v=1: (1.736070507s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p addons-277112 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p addons-277112 addons disable ingress --alsologtostderr -v=1: (7.748151212s)
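The root failure in this test is the nslookup timeout above: hello-john.test never resolved against the node's ingress-dns endpoint at 192.168.49.2. A hedged way to reproduce that lookup in Go, pinning the resolver to the node IP (a sketch, assuming ingress-dns serves plain DNS on port 53; the 15s budget matches the timeout seen in the log):

	// Query hello-john.test directly against the minikube node's DNS
	// endpoint, mimicking the failed `nslookup hello-john.test 192.168.49.2`.
	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 5 * time.Second}
				// Send every query to the node, not the host's resolver.
				return d.DialContext(ctx, network, "192.168.49.2:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "hello-john.test")
		if err != nil {
			fmt.Println("lookup failed:", err) // the run above timed out here
			return
		}
		fmt.Println("resolved to:", addrs)
	}
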
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-277112
helpers_test.go:235: (dbg) docker inspect addons-277112:

-- stdout --
	[
	    {
	        "Id": "637af6e2f1f48e9f8fd92cb3a3efa4c8da1cdd2583520b6ec7a97e3137c225fc",
	        "Created": "2023-12-18T22:37:38.469632846Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8552,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-18T22:37:38.790657179Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1814c03459e4d6cfdad8e09772588ae07599742dd01f942aa2f9fe1dbb6d2813",
	        "ResolvConfPath": "/var/lib/docker/containers/637af6e2f1f48e9f8fd92cb3a3efa4c8da1cdd2583520b6ec7a97e3137c225fc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/637af6e2f1f48e9f8fd92cb3a3efa4c8da1cdd2583520b6ec7a97e3137c225fc/hostname",
	        "HostsPath": "/var/lib/docker/containers/637af6e2f1f48e9f8fd92cb3a3efa4c8da1cdd2583520b6ec7a97e3137c225fc/hosts",
	        "LogPath": "/var/lib/docker/containers/637af6e2f1f48e9f8fd92cb3a3efa4c8da1cdd2583520b6ec7a97e3137c225fc/637af6e2f1f48e9f8fd92cb3a3efa4c8da1cdd2583520b6ec7a97e3137c225fc-json.log",
	        "Name": "/addons-277112",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-277112:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-277112",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5767e47cdc152950a21cba16338d6150d276c61a0e9eaadfc76ca45aff9b28f4-init/diff:/var/lib/docker/overlay2/bc6e43a078e26c3419854bafc48fcee558a938ae61de23978bcedc185e547bd8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5767e47cdc152950a21cba16338d6150d276c61a0e9eaadfc76ca45aff9b28f4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5767e47cdc152950a21cba16338d6150d276c61a0e9eaadfc76ca45aff9b28f4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5767e47cdc152950a21cba16338d6150d276c61a0e9eaadfc76ca45aff9b28f4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-277112",
	                "Source": "/var/lib/docker/volumes/addons-277112/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-277112",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-277112",
	                "name.minikube.sigs.k8s.io": "addons-277112",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1e63c5de12616dc901966ff70b97c2c7131554b0eb1898233169c42e032a9869",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1e63c5de1261",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-277112": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "637af6e2f1f4",
	                        "addons-277112"
	                    ],
	                    "NetworkID": "936e60861fc511156051669650b66b9f29c0b399e8025c18c88461a1cdb317eb",
	                    "EndpointID": "aa268c07e0b581d2720b69473aee9df541cf46cb387c5f3a9b446e5bee50817b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-277112 -n addons-277112
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-277112 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-277112 logs -n 25: (1.168560347s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-196183   | jenkins | v1.32.0 | 18 Dec 23 22:36 UTC |                     |
	|         | -p download-only-196183                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-196183   | jenkins | v1.32.0 | 18 Dec 23 22:37 UTC |                     |
	|         | -p download-only-196183                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 18 Dec 23 22:37 UTC | 18 Dec 23 22:37 UTC |
	| delete  | -p download-only-196183                                                                     | download-only-196183   | jenkins | v1.32.0 | 18 Dec 23 22:37 UTC | 18 Dec 23 22:37 UTC |
	| delete  | -p download-only-196183                                                                     | download-only-196183   | jenkins | v1.32.0 | 18 Dec 23 22:37 UTC | 18 Dec 23 22:37 UTC |
	| start   | --download-only -p                                                                          | download-docker-633534 | jenkins | v1.32.0 | 18 Dec 23 22:37 UTC |                     |
	|         | download-docker-633534                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p download-docker-633534                                                                   | download-docker-633534 | jenkins | v1.32.0 | 18 Dec 23 22:37 UTC | 18 Dec 23 22:37 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-625373   | jenkins | v1.32.0 | 18 Dec 23 22:37 UTC |                     |
	|         | binary-mirror-625373                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:35709                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-625373                                                                     | binary-mirror-625373   | jenkins | v1.32.0 | 18 Dec 23 22:37 UTC | 18 Dec 23 22:37 UTC |
	| addons  | disable dashboard -p                                                                        | addons-277112          | jenkins | v1.32.0 | 18 Dec 23 22:37 UTC |                     |
	|         | addons-277112                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-277112          | jenkins | v1.32.0 | 18 Dec 23 22:37 UTC |                     |
	|         | addons-277112                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-277112 --wait=true                                                                | addons-277112          | jenkins | v1.32.0 | 18 Dec 23 22:37 UTC | 18 Dec 23 22:39 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-277112 ip                                                                            | addons-277112          | jenkins | v1.32.0 | 18 Dec 23 22:39 UTC | 18 Dec 23 22:39 UTC |
	| addons  | addons-277112 addons disable                                                                | addons-277112          | jenkins | v1.32.0 | 18 Dec 23 22:39 UTC | 18 Dec 23 22:39 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-277112 addons                                                                        | addons-277112          | jenkins | v1.32.0 | 18 Dec 23 22:40 UTC | 18 Dec 23 22:40 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-277112          | jenkins | v1.32.0 | 18 Dec 23 22:40 UTC | 18 Dec 23 22:40 UTC |
	|         | addons-277112                                                                               |                        |         |         |                     |                     |
	| addons  | addons-277112 addons                                                                        | addons-277112          | jenkins | v1.32.0 | 18 Dec 23 22:40 UTC | 18 Dec 23 22:40 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-277112 addons                                                                        | addons-277112          | jenkins | v1.32.0 | 18 Dec 23 22:40 UTC | 18 Dec 23 22:40 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-277112 ssh curl -s                                                                   | addons-277112          | jenkins | v1.32.0 | 18 Dec 23 22:40 UTC | 18 Dec 23 22:40 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-277112 ip                                                                            | addons-277112          | jenkins | v1.32.0 | 18 Dec 23 22:40 UTC | 18 Dec 23 22:40 UTC |
	| addons  | disable nvidia-device-plugin                                                                | addons-277112          | jenkins | v1.32.0 | 18 Dec 23 22:40 UTC | 18 Dec 23 22:40 UTC |
	|         | -p addons-277112                                                                            |                        |         |         |                     |                     |
	| addons  | addons-277112 addons disable                                                                | addons-277112          | jenkins | v1.32.0 | 18 Dec 23 22:40 UTC | 18 Dec 23 22:40 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-277112 ssh cat                                                                       | addons-277112          | jenkins | v1.32.0 | 18 Dec 23 22:40 UTC | 18 Dec 23 22:40 UTC |
	|         | /opt/local-path-provisioner/pvc-2c036aaf-9188-4922-a1b6-850c21e22b1b_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-277112 addons disable                                                                | addons-277112          | jenkins | v1.32.0 | 18 Dec 23 22:40 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-277112 addons disable                                                                | addons-277112          | jenkins | v1.32.0 | 18 Dec 23 22:40 UTC | 18 Dec 23 22:40 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 22:37:15
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 22:37:15.085152    8082 out.go:296] Setting OutFile to fd 1 ...
	I1218 22:37:15.085366    8082 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 22:37:15.085395    8082 out.go:309] Setting ErrFile to fd 2...
	I1218 22:37:15.085421    8082 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 22:37:15.085766    8082 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-2192/.minikube/bin
	I1218 22:37:15.086361    8082 out.go:303] Setting JSON to false
	I1218 22:37:15.087212    8082 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1184,"bootTime":1702937851,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1218 22:37:15.087319    8082 start.go:138] virtualization:  
	I1218 22:37:15.109893    8082 out.go:177] * [addons-277112] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1218 22:37:15.142424    8082 out.go:177]   - MINIKUBE_LOCATION=17822
	I1218 22:37:15.142458    8082 notify.go:220] Checking for updates...
	I1218 22:37:15.176323    8082 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 22:37:15.206575    8082 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17822-2192/kubeconfig
	I1218 22:37:15.238268    8082 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-2192/.minikube
	I1218 22:37:15.271515    8082 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 22:37:15.300331    8082 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 22:37:15.320127    8082 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 22:37:15.343716    8082 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1218 22:37:15.343835    8082 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 22:37:15.416740    8082 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-12-18 22:37:15.407586095 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 22:37:15.416856    8082 docker.go:295] overlay module found
	I1218 22:37:15.433822    8082 out.go:177] * Using the docker driver based on user configuration
	I1218 22:37:15.440756    8082 start.go:298] selected driver: docker
	I1218 22:37:15.440783    8082 start.go:902] validating driver "docker" against <nil>
	I1218 22:37:15.440797    8082 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 22:37:15.441413    8082 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 22:37:15.518214    8082 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-12-18 22:37:15.509121215 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 22:37:15.518369    8082 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1218 22:37:15.518579    8082 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 22:37:15.528156    8082 out.go:177] * Using Docker driver with root privileges
	I1218 22:37:15.534453    8082 cni.go:84] Creating CNI manager for ""
	I1218 22:37:15.534483    8082 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1218 22:37:15.534496    8082 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1218 22:37:15.534511    8082 start_flags.go:323] config:
	{Name:addons-277112 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-277112 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 22:37:15.540315    8082 out.go:177] * Starting control plane node addons-277112 in cluster addons-277112
	I1218 22:37:15.544225    8082 cache.go:121] Beginning downloading kic base image for docker with docker
	I1218 22:37:15.548778    8082 out.go:177] * Pulling base image v0.0.42-1702920864-17822 ...
	I1218 22:37:15.553070    8082 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1218 22:37:15.553116    8082 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17822-2192/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1218 22:37:15.553138    8082 cache.go:56] Caching tarball of preloaded images
	I1218 22:37:15.553164    8082 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon
	I1218 22:37:15.553222    8082 preload.go:174] Found /home/jenkins/minikube-integration/17822-2192/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1218 22:37:15.553232    8082 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1218 22:37:15.553575    8082 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/config.json ...
	I1218 22:37:15.553600    8082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/config.json: {Name:mk45c415abd6d4e0be9ddb982c6b7430fdbb49c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:37:15.571001    8082 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 to local cache
	I1218 22:37:15.571132    8082 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory
	I1218 22:37:15.571150    8082 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory, skipping pull
	I1218 22:37:15.571156    8082 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 exists in cache, skipping pull
	I1218 22:37:15.571164    8082 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 as a tarball
	I1218 22:37:15.571170    8082 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 from local cache
	I1218 22:37:31.010164    8082 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 from cached tarball
	I1218 22:37:31.010203    8082 cache.go:194] Successfully downloaded all kic artifacts
	I1218 22:37:31.010266    8082 start.go:365] acquiring machines lock for addons-277112: {Name:mk49a6ff69440073ff7d88009f233bf092c93431 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 22:37:31.010379    8082 start.go:369] acquired machines lock for "addons-277112" in 92.472µs
	I1218 22:37:31.010408    8082 start.go:93] Provisioning new machine with config: &{Name:addons-277112 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-277112 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1218 22:37:31.010489    8082 start.go:125] createHost starting for "" (driver="docker")
	I1218 22:37:31.013127    8082 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1218 22:37:31.013387    8082 start.go:159] libmachine.API.Create for "addons-277112" (driver="docker")
	I1218 22:37:31.013421    8082 client.go:168] LocalClient.Create starting
	I1218 22:37:31.013557    8082 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17822-2192/.minikube/certs/ca.pem
	I1218 22:37:31.367877    8082 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17822-2192/.minikube/certs/cert.pem
	I1218 22:37:32.040840    8082 cli_runner.go:164] Run: docker network inspect addons-277112 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1218 22:37:32.059461    8082 cli_runner.go:211] docker network inspect addons-277112 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1218 22:37:32.059555    8082 network_create.go:281] running [docker network inspect addons-277112] to gather additional debugging logs...
	I1218 22:37:32.059579    8082 cli_runner.go:164] Run: docker network inspect addons-277112
	W1218 22:37:32.076706    8082 cli_runner.go:211] docker network inspect addons-277112 returned with exit code 1
	I1218 22:37:32.076735    8082 network_create.go:284] error running [docker network inspect addons-277112]: docker network inspect addons-277112: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-277112 not found
	I1218 22:37:32.076758    8082 network_create.go:286] output of [docker network inspect addons-277112]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-277112 not found
	
	** /stderr **
	I1218 22:37:32.076855    8082 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 22:37:32.094190    8082 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40025b65f0}
	I1218 22:37:32.094226    8082 network_create.go:124] attempt to create docker network addons-277112 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1218 22:37:32.094282    8082 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-277112 addons-277112
	I1218 22:37:32.167837    8082 network_create.go:108] docker network addons-277112 192.168.49.0/24 created
	I1218 22:37:32.167871    8082 kic.go:121] calculated static IP "192.168.49.2" for the "addons-277112" container
	I1218 22:37:32.167943    8082 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1218 22:37:32.184974    8082 cli_runner.go:164] Run: docker volume create addons-277112 --label name.minikube.sigs.k8s.io=addons-277112 --label created_by.minikube.sigs.k8s.io=true
	I1218 22:37:32.203066    8082 oci.go:103] Successfully created a docker volume addons-277112
	I1218 22:37:32.203155    8082 cli_runner.go:164] Run: docker run --rm --name addons-277112-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-277112 --entrypoint /usr/bin/test -v addons-277112:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -d /var/lib
	I1218 22:37:34.433195    8082 cli_runner.go:217] Completed: docker run --rm --name addons-277112-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-277112 --entrypoint /usr/bin/test -v addons-277112:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -d /var/lib: (2.229993105s)
	I1218 22:37:34.433222    8082 oci.go:107] Successfully prepared a docker volume addons-277112
	I1218 22:37:34.433249    8082 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1218 22:37:34.433269    8082 kic.go:194] Starting extracting preloaded images to volume ...
	I1218 22:37:34.433348    8082 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17822-2192/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-277112:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1218 22:37:38.385722    8082 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17822-2192/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-277112:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.952336343s)
	I1218 22:37:38.385751    8082 kic.go:203] duration metric: took 3.952479 seconds to extract preloaded images to volume
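	The extraction above follows a sidecar pattern: the preload tarball is bind-mounted read-only, the named volume is mounted at /extractDir, and tar runs inside the kicbase image, so the host itself needs no lz4 binary. Roughly, with hypothetical names my-vol and preload.tar.lz4:

	    docker volume create my-vol
	    docker run --rm --entrypoint /usr/bin/tar \
	      -v "$PWD/preload.tar.lz4:/preloaded.tar:ro" \
	      -v my-vol:/extractDir \
	      gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822 \
	      -I lz4 -xf /preloaded.tar -C /extractDir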
	W1218 22:37:38.385892    8082 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1218 22:37:38.386005    8082 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1218 22:37:38.453668    8082 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-277112 --name addons-277112 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-277112 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-277112 --network addons-277112 --ip 192.168.49.2 --volume addons-277112:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0
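	Each --publish=127.0.0.1:: above maps a container port (22, 2376, 5000, 8443, 32443) to an ephemeral localhost port; the provisioning steps below recover that mapping with container inspect. The same lookup by hand:

	    docker port addons-277112 22/tcp    # prints e.g. 127.0.0.1:32772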
	I1218 22:37:38.801599    8082 cli_runner.go:164] Run: docker container inspect addons-277112 --format={{.State.Running}}
	I1218 22:37:38.821267    8082 cli_runner.go:164] Run: docker container inspect addons-277112 --format={{.State.Status}}
	I1218 22:37:38.844177    8082 cli_runner.go:164] Run: docker exec addons-277112 stat /var/lib/dpkg/alternatives/iptables
	I1218 22:37:38.898172    8082 oci.go:144] the created container "addons-277112" has a running status.
	I1218 22:37:38.898195    8082 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17822-2192/.minikube/machines/addons-277112/id_rsa...
	I1218 22:37:39.319273    8082 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17822-2192/.minikube/machines/addons-277112/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1218 22:37:39.344838    8082 cli_runner.go:164] Run: docker container inspect addons-277112 --format={{.State.Status}}
	I1218 22:37:39.378064    8082 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1218 22:37:39.378083    8082 kic_runner.go:114] Args: [docker exec --privileged addons-277112 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1218 22:37:39.478885    8082 cli_runner.go:164] Run: docker container inspect addons-277112 --format={{.State.Status}}
	I1218 22:37:39.502888    8082 machine.go:88] provisioning docker machine ...
	I1218 22:37:39.502915    8082 ubuntu.go:169] provisioning hostname "addons-277112"
	I1218 22:37:39.502975    8082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-277112
	I1218 22:37:39.535997    8082 main.go:141] libmachine: Using SSH client type: native
	I1218 22:37:39.536418    8082 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1218 22:37:39.536431    8082 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-277112 && echo "addons-277112" | sudo tee /etc/hostname
	I1218 22:37:39.734389    8082 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-277112
	
	I1218 22:37:39.734526    8082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-277112
	I1218 22:37:39.752940    8082 main.go:141] libmachine: Using SSH client type: native
	I1218 22:37:39.753342    8082 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1218 22:37:39.753361    8082 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-277112' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-277112/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-277112' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 22:37:39.905392    8082 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1218 22:37:39.905491    8082 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17822-2192/.minikube CaCertPath:/home/jenkins/minikube-integration/17822-2192/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17822-2192/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17822-2192/.minikube}
	I1218 22:37:39.905544    8082 ubuntu.go:177] setting up certificates
	I1218 22:37:39.905570    8082 provision.go:83] configureAuth start
	I1218 22:37:39.905652    8082 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-277112
	I1218 22:37:39.931990    8082 provision.go:138] copyHostCerts
	I1218 22:37:39.932063    8082 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-2192/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17822-2192/.minikube/cert.pem (1123 bytes)
	I1218 22:37:39.932182    8082 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-2192/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17822-2192/.minikube/key.pem (1675 bytes)
	I1218 22:37:39.932248    8082 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-2192/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17822-2192/.minikube/ca.pem (1078 bytes)
	I1218 22:37:39.932299    8082 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17822-2192/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17822-2192/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17822-2192/.minikube/certs/ca-key.pem org=jenkins.addons-277112 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-277112]
	I1218 22:37:41.042122    8082 provision.go:172] copyRemoteCerts
	I1218 22:37:41.042186    8082 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 22:37:41.042238    8082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-277112
	I1218 22:37:41.060237    8082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/addons-277112/id_rsa Username:docker}
	I1218 22:37:41.162690    8082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-2192/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1218 22:37:41.189352    8082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-2192/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1218 22:37:41.216046    8082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-2192/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1218 22:37:41.242903    8082 provision.go:86] duration metric: configureAuth took 1.337308705s
	I1218 22:37:41.242994    8082 ubuntu.go:193] setting minikube options for container-runtime
	I1218 22:37:41.243202    8082 config.go:182] Loaded profile config "addons-277112": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 22:37:41.243264    8082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-277112
	I1218 22:37:41.261020    8082 main.go:141] libmachine: Using SSH client type: native
	I1218 22:37:41.261425    8082 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1218 22:37:41.261441    8082 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1218 22:37:41.413946    8082 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1218 22:37:41.413965    8082 ubuntu.go:71] root file system type: overlay
	I1218 22:37:41.414071    8082 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1218 22:37:41.414144    8082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-277112
	I1218 22:37:41.432376    8082 main.go:141] libmachine: Using SSH client type: native
	I1218 22:37:41.432888    8082 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1218 22:37:41.432977    8082 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1218 22:37:41.593717    8082 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1218 22:37:41.593802    8082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-277112
	I1218 22:37:41.617667    8082 main.go:141] libmachine: Using SSH client type: native
	I1218 22:37:41.618075    8082 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1218 22:37:41.618099    8082 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1218 22:37:42.417511    8082 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:20.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-12-18 22:37:41.587049552 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1218 22:37:42.417541    8082 machine.go:91] provisioned docker machine in 2.914635266s
	I1218 22:37:42.417553    8082 client.go:171] LocalClient.Create took 11.404124759s
	I1218 22:37:42.417564    8082 start.go:167] duration metric: libmachine.API.Create for "addons-277112" took 11.404177255s
	I1218 22:37:42.417572    8082 start.go:300] post-start starting for "addons-277112" (driver="docker")
	I1218 22:37:42.417581    8082 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 22:37:42.417648    8082 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 22:37:42.417696    8082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-277112
	I1218 22:37:42.436736    8082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/addons-277112/id_rsa Username:docker}
	I1218 22:37:42.538959    8082 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 22:37:42.542742    8082 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 22:37:42.542776    8082 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1218 22:37:42.542788    8082 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1218 22:37:42.542795    8082 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1218 22:37:42.542805    8082 filesync.go:126] Scanning /home/jenkins/minikube-integration/17822-2192/.minikube/addons for local assets ...
	I1218 22:37:42.542873    8082 filesync.go:126] Scanning /home/jenkins/minikube-integration/17822-2192/.minikube/files for local assets ...
	I1218 22:37:42.542900    8082 start.go:303] post-start completed in 125.322207ms
	I1218 22:37:42.543198    8082 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-277112
	I1218 22:37:42.560996    8082 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/config.json ...
	I1218 22:37:42.561265    8082 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 22:37:42.561316    8082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-277112
	I1218 22:37:42.583269    8082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/addons-277112/id_rsa Username:docker}
	I1218 22:37:42.682205    8082 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 22:37:42.687484    8082 start.go:128] duration metric: createHost completed in 11.67698057s
	I1218 22:37:42.687538    8082 start.go:83] releasing machines lock for "addons-277112", held for 11.677146315s
	I1218 22:37:42.687610    8082 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-277112
	I1218 22:37:42.705346    8082 ssh_runner.go:195] Run: cat /version.json
	I1218 22:37:42.705403    8082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-277112
	I1218 22:37:42.705464    8082 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 22:37:42.705527    8082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-277112
	I1218 22:37:42.725643    8082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/addons-277112/id_rsa Username:docker}
	I1218 22:37:42.725956    8082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/addons-277112/id_rsa Username:docker}
	I1218 22:37:42.957340    8082 ssh_runner.go:195] Run: systemctl --version
	I1218 22:37:42.962542    8082 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1218 22:37:42.967677    8082 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1218 22:37:42.996308    8082 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1218 22:37:42.996417    8082 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 22:37:43.028024    8082 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1218 22:37:43.028100    8082 start.go:475] detecting cgroup driver to use...
	I1218 22:37:43.028145    8082 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1218 22:37:43.028278    8082 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 22:37:43.047200    8082 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1218 22:37:43.058938    8082 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 22:37:43.070029    8082 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 22:37:43.070102    8082 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 22:37:43.081101    8082 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 22:37:43.092086    8082 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 22:37:43.103018    8082 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 22:37:43.113474    8082 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 22:37:43.124685    8082 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
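	Taken together, the sed one-liners above pin containerd's CRI settings; after they run, the affected lines of /etc/containerd/config.toml should read roughly as follows (excerpt, reconstructed from the patterns above):

	    sandbox_image = "registry.k8s.io/pause:3.9"
	    restrict_oom_score_adj = false
	    SystemdCgroup = false    # cgroupfs driver, matching the detection above
	    conf_dir = "/etc/cni/net.d"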
	I1218 22:37:43.135528    8082 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 22:37:43.144649    8082 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 22:37:43.153963    8082 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 22:37:43.236676    8082 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1218 22:37:43.359484    8082 start.go:475] detecting cgroup driver to use...
	I1218 22:37:43.359530    8082 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1218 22:37:43.359579    8082 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1218 22:37:43.377265    8082 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1218 22:37:43.377336    8082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 22:37:43.391522    8082 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 22:37:43.409684    8082 ssh_runner.go:195] Run: which cri-dockerd
	I1218 22:37:43.414178    8082 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1218 22:37:43.424060    8082 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1218 22:37:43.449827    8082 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1218 22:37:43.558412    8082 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1218 22:37:43.673051    8082 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1218 22:37:43.673170    8082 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1218 22:37:43.701307    8082 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 22:37:43.797235    8082 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1218 22:37:44.057002    8082 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1218 22:37:44.142095    8082 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1218 22:37:44.225976    8082 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1218 22:37:44.310894    8082 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 22:37:44.402377    8082 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1218 22:37:44.417713    8082 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 22:37:44.511494    8082 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1218 22:37:44.594520    8082 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1218 22:37:44.594657    8082 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1218 22:37:44.599570    8082 start.go:543] Will wait 60s for crictl version
	I1218 22:37:44.599662    8082 ssh_runner.go:195] Run: which crictl
	I1218 22:37:44.603977    8082 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1218 22:37:44.657425    8082 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1218 22:37:44.657532    8082 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1218 22:37:44.687255    8082 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1218 22:37:44.716879    8082 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1218 22:37:44.717004    8082 cli_runner.go:164] Run: docker network inspect addons-277112 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 22:37:44.733921    8082 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1218 22:37:44.738365    8082 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
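	The /etc/hosts update above uses a filter-append-copy pattern instead of sed -i, presumably because /etc/hosts is bind-mounted into the container and replacing its inode would fail; cp writes through the existing inode. Spelled out:

	    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
	      printf '192.168.49.1\thost.minikube.internal\n'; } > /tmp/hosts.new
	    sudo cp /tmp/hosts.new /etc/hosts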
	I1218 22:37:44.751088    8082 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1218 22:37:44.751154    8082 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1218 22:37:44.771692    8082 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1218 22:37:44.771735    8082 docker.go:601] Images already preloaded, skipping extraction
	I1218 22:37:44.771800    8082 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1218 22:37:44.791747    8082 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1218 22:37:44.791770    8082 cache_images.go:84] Images are preloaded, skipping loading
	I1218 22:37:44.791826    8082 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1218 22:37:44.850334    8082 cni.go:84] Creating CNI manager for ""
	I1218 22:37:44.850357    8082 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1218 22:37:44.850386    8082 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1218 22:37:44.850404    8082 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-277112 NodeName:addons-277112 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 22:37:44.850538    8082 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-277112"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
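	The rendered kubeadm.yaml above stacks four API objects in one file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. One way to see which values minikube overrides is to compare it against upstream defaults:

	    kubeadm config print init-defaults \
	      --component-configs KubeletConfiguration,KubeProxyConfiguration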
	
	I1218 22:37:44.850606    8082 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-277112 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-277112 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1218 22:37:44.850669    8082 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1218 22:37:44.861050    8082 binaries.go:44] Found k8s binaries, skipping transfer
	I1218 22:37:44.861118    8082 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 22:37:44.870903    8082 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1218 22:37:44.890739    8082 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1218 22:37:44.911091    8082 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1218 22:37:44.931079    8082 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1218 22:37:44.935143    8082 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 22:37:44.947680    8082 certs.go:56] Setting up /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112 for IP: 192.168.49.2
	I1218 22:37:44.947756    8082 certs.go:190] acquiring lock for shared ca certs: {Name:mkcf78e809e515e2090b1ff7ca96510a1c2d2b6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:37:44.947897    8082 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17822-2192/.minikube/ca.key
	I1218 22:37:45.130527    8082 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-2192/.minikube/ca.crt ...
	I1218 22:37:45.130557    8082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-2192/.minikube/ca.crt: {Name:mk5aed99246f5d2c9e614e8e872719b372bb2f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:37:45.130755    8082 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-2192/.minikube/ca.key ...
	I1218 22:37:45.130770    8082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-2192/.minikube/ca.key: {Name:mka010b4b3eb11c9b6d1141e3ca31e7b627c7391 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:37:45.130859    8082 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17822-2192/.minikube/proxy-client-ca.key
	I1218 22:37:45.639154    8082 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-2192/.minikube/proxy-client-ca.crt ...
	I1218 22:37:45.639182    8082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-2192/.minikube/proxy-client-ca.crt: {Name:mkd66d5db76e2df9e6b158fa03f184be57ac2af5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:37:45.639361    8082 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-2192/.minikube/proxy-client-ca.key ...
	I1218 22:37:45.639373    8082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-2192/.minikube/proxy-client-ca.key: {Name:mk0b149c9241625868b4bde17b472a43f71eefa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:37:45.639481    8082 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.key
	I1218 22:37:45.639499    8082 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt with IP's: []
	I1218 22:37:46.165328    8082 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt ...
	I1218 22:37:46.165357    8082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: {Name:mk534375d5f180211551f5a73108a0f1ff21e56f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:37:46.165535    8082 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.key ...
	I1218 22:37:46.165549    8082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.key: {Name:mk62d0acdf44dd385e3a0855737be17e4664d622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:37:46.165629    8082 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/apiserver.key.dd3b5fb2
	I1218 22:37:46.165650    8082 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1218 22:37:46.781087    8082 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/apiserver.crt.dd3b5fb2 ...
	I1218 22:37:46.781115    8082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/apiserver.crt.dd3b5fb2: {Name:mk7faa1d51ccf9ec58b8a6117613d7f813f79c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:37:46.781292    8082 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/apiserver.key.dd3b5fb2 ...
	I1218 22:37:46.781305    8082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/apiserver.key.dd3b5fb2: {Name:mk2157a2e13df3fd153421c587da6594d68be9a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:37:46.781383    8082 certs.go:337] copying /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/apiserver.crt
	I1218 22:37:46.781461    8082 certs.go:341] copying /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/apiserver.key
	I1218 22:37:46.781517    8082 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/proxy-client.key
	I1218 22:37:46.781535    8082 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/proxy-client.crt with IP's: []
	I1218 22:37:47.044135    8082 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/proxy-client.crt ...
	I1218 22:37:47.044163    8082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/proxy-client.crt: {Name:mk13060043c96469ff682d227b4da5389ff8cdfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:37:47.044335    8082 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/proxy-client.key ...
	I1218 22:37:47.044346    8082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/proxy-client.key: {Name:mkcf29d9bc2bd2c822188ac94d2780b894130741 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:37:47.044525    8082 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-2192/.minikube/certs/home/jenkins/minikube-integration/17822-2192/.minikube/certs/ca-key.pem (1679 bytes)
	I1218 22:37:47.044585    8082 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-2192/.minikube/certs/home/jenkins/minikube-integration/17822-2192/.minikube/certs/ca.pem (1078 bytes)
	I1218 22:37:47.044615    8082 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-2192/.minikube/certs/home/jenkins/minikube-integration/17822-2192/.minikube/certs/cert.pem (1123 bytes)
	I1218 22:37:47.044644    8082 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-2192/.minikube/certs/home/jenkins/minikube-integration/17822-2192/.minikube/certs/key.pem (1675 bytes)
	I1218 22:37:47.045229    8082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1218 22:37:47.072467    8082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1218 22:37:47.099091    8082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 22:37:47.125946    8082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1218 22:37:47.152262    8082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-2192/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 22:37:47.179312    8082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-2192/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1218 22:37:47.206288    8082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-2192/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 22:37:47.232467    8082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-2192/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1218 22:37:47.258995    8082 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-2192/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 22:37:47.285319    8082 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 22:37:47.305185    8082 ssh_runner.go:195] Run: openssl version
	I1218 22:37:47.311693    8082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1218 22:37:47.323234    8082 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 22:37:47.327448    8082 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 18 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I1218 22:37:47.327541    8082 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 22:37:47.335388    8082 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
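	The hash-and-symlink pair above follows OpenSSL's hashed-directory convention: a link named <subject-hash>.0 (here b5213941.0) lets TLS clients resolve minikubeCA by subject hash without rebuilding a bundle. Equivalently:

	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"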
	I1218 22:37:47.345988    8082 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1218 22:37:47.350079    8082 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1218 22:37:47.350151    8082 kubeadm.go:404] StartCluster: {Name:addons-277112 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-277112 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 22:37:47.350261    8082 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1218 22:37:47.368946    8082 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 22:37:47.378618    8082 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 22:37:47.388132    8082 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1218 22:37:47.388209    8082 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 22:37:47.397770    8082 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 22:37:47.397803    8082 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 22:37:47.450114    8082 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1218 22:37:47.450261    8082 kubeadm.go:322] [preflight] Running pre-flight checks
	I1218 22:37:47.506797    8082 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1218 22:37:47.506877    8082 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I1218 22:37:47.506924    8082 kubeadm.go:322] OS: Linux
	I1218 22:37:47.506975    8082 kubeadm.go:322] CGROUPS_CPU: enabled
	I1218 22:37:47.507030    8082 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1218 22:37:47.507081    8082 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1218 22:37:47.507133    8082 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1218 22:37:47.507186    8082 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1218 22:37:47.507238    8082 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1218 22:37:47.507287    8082 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1218 22:37:47.507339    8082 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1218 22:37:47.507390    8082 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1218 22:37:47.584579    8082 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 22:37:47.584731    8082 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 22:37:47.584882    8082 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1218 22:37:47.912320    8082 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 22:37:47.917319    8082 out.go:204]   - Generating certificates and keys ...
	I1218 22:37:47.917494    8082 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1218 22:37:47.917598    8082 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1218 22:37:48.970427    8082 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1218 22:37:49.827358    8082 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1218 22:37:50.581862    8082 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1218 22:37:51.158398    8082 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1218 22:37:51.577527    8082 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1218 22:37:51.577750    8082 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-277112 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1218 22:37:52.009939    8082 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1218 22:37:52.010211    8082 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-277112 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1218 22:37:52.185929    8082 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1218 22:37:52.353308    8082 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1218 22:37:52.985343    8082 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1218 22:37:52.985656    8082 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 22:37:53.531484    8082 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 22:37:53.688693    8082 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 22:37:53.980899    8082 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 22:37:54.625983    8082 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 22:37:54.626747    8082 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 22:37:54.631197    8082 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 22:37:54.633741    8082 out.go:204]   - Booting up control plane ...
	I1218 22:37:54.633837    8082 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 22:37:54.633923    8082 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 22:37:54.634690    8082 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 22:37:54.648906    8082 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 22:37:54.649617    8082 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 22:37:54.649830    8082 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1218 22:37:54.752192    8082 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1218 22:38:03.254360    8082 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502266 seconds
	I1218 22:38:03.254480    8082 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1218 22:38:03.266423    8082 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1218 22:38:03.790664    8082 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1218 22:38:03.790851    8082 kubeadm.go:322] [mark-control-plane] Marking the node addons-277112 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1218 22:38:04.301379    8082 kubeadm.go:322] [bootstrap-token] Using token: hz46xy.7wtiubtd5ox8gpk6
	I1218 22:38:04.303635    8082 out.go:204]   - Configuring RBAC rules ...
	I1218 22:38:04.303752    8082 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1218 22:38:04.307887    8082 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1218 22:38:04.315032    8082 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1218 22:38:04.318304    8082 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1218 22:38:04.324278    8082 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1218 22:38:04.327504    8082 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1218 22:38:04.339806    8082 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1218 22:38:04.570602    8082 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1218 22:38:04.713200    8082 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1218 22:38:04.716022    8082 kubeadm.go:322] 
	I1218 22:38:04.716096    8082 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1218 22:38:04.716103    8082 kubeadm.go:322] 
	I1218 22:38:04.716176    8082 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1218 22:38:04.716181    8082 kubeadm.go:322] 
	I1218 22:38:04.716205    8082 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1218 22:38:04.716261    8082 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1218 22:38:04.716309    8082 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1218 22:38:04.716314    8082 kubeadm.go:322] 
	I1218 22:38:04.716365    8082 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1218 22:38:04.716369    8082 kubeadm.go:322] 
	I1218 22:38:04.716414    8082 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1218 22:38:04.716419    8082 kubeadm.go:322] 
	I1218 22:38:04.716468    8082 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1218 22:38:04.716612    8082 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1218 22:38:04.716683    8082 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1218 22:38:04.716688    8082 kubeadm.go:322] 
	I1218 22:38:04.716768    8082 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1218 22:38:04.716847    8082 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1218 22:38:04.716852    8082 kubeadm.go:322] 
	I1218 22:38:04.716931    8082 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token hz46xy.7wtiubtd5ox8gpk6 \
	I1218 22:38:04.717029    8082 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c9557dd7f437ed4cd7be329c62a8a7ae9cbf7c397b86c56a297c9c177867a738 \
	I1218 22:38:04.717062    8082 kubeadm.go:322] 	--control-plane 
	I1218 22:38:04.717067    8082 kubeadm.go:322] 
	I1218 22:38:04.717151    8082 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1218 22:38:04.717157    8082 kubeadm.go:322] 
	I1218 22:38:04.717235    8082 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token hz46xy.7wtiubtd5ox8gpk6 \
	I1218 22:38:04.717331    8082 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c9557dd7f437ed4cd7be329c62a8a7ae9cbf7c397b86c56a297c9c177867a738 
	I1218 22:38:04.720073    8082 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I1218 22:38:04.720181    8082 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
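[Editor's note] The kubeadm join commands above embed a --discovery-token-ca-cert-hash, which kubeadm derives as the SHA-256 of the cluster CA certificate's Subject Public Key Info. A minimal Go sketch of that computation, assuming the standard kubeadm CA location /etc/kubernetes/pki/ca.crt:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        // Read the cluster CA certificate (standard kubeadm location).
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // kubeadm hashes the DER-encoded Subject Public Key Info of the CA cert.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }

Run against the CA above, this prints the same sha256:c9557dd7... value that appears in both join commands.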
	I1218 22:38:04.720196    8082 cni.go:84] Creating CNI manager for ""
	I1218 22:38:04.720216    8082 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1218 22:38:04.722335    8082 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1218 22:38:04.724081    8082 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1218 22:38:04.736450    8082 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
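[Editor's note] The 457-byte payload copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI chain announced at 22:38:04.722335. The sketch below is an illustrative minimal bridge+portmap chain written the same way; the field values and subnet are assumptions for illustration, not the verbatim contents of minikube's file:

    package main

    import (
        "log"
        "os"
    )

    // Illustrative bridge CNI chain; values are assumptions, not
    // the exact 457-byte conflist minikube transferred above.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            log.Fatal(err)
        }
    }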
	I1218 22:38:04.778860    8082 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1218 22:38:04.778992    8082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:38:04.779064    8082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2 minikube.k8s.io/name=addons-277112 minikube.k8s.io/updated_at=2023_12_18T22_38_04_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:38:05.108169    8082 ops.go:34] apiserver oom_adj: -16
	I1218 22:38:05.108254    8082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:38:05.608346    8082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:38:06.108717    8082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:38:06.608326    8082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:38:07.109047    8082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:38:07.608969    8082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:38:08.109124    8082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:38:08.608489    8082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:38:09.109299    8082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:38:09.608801    8082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:38:10.108895    8082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:38:10.608383    8082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:38:11.108521    8082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:38:11.608771    8082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:38:12.109148    8082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:38:12.609357    8082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:38:13.108417    8082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:38:13.608393    8082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:38:14.108943    8082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:38:14.608702    8082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:38:15.109295    8082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:38:15.608771    8082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:38:16.108407    8082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:38:16.608391    8082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:38:17.108372    8082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:38:17.608563    8082 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:38:17.737852    8082 kubeadm.go:1088] duration metric: took 12.958903707s to wait for elevateKubeSystemPrivileges.
	I1218 22:38:17.737876    8082 kubeadm.go:406] StartCluster complete in 30.387729969s
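[Editor's note] The burst of identical `kubectl get sa default` runs above (22:38:05 through 22:38:17) is the elevateKubeSystemPrivileges wait: minikube polls roughly every 500ms until the "default" ServiceAccount exists before binding cluster-admin to kube-system:default. A stdlib-only Go sketch of that polling pattern, with the binary and kubeconfig paths taken from the log (the helper name is ours, not minikube's):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    // waitForDefaultSA re-runs `kubectl get sa default` until it succeeds
    // or the deadline passes, mirroring the ~500ms cadence in the log.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
            if err := cmd.Run(); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("default ServiceAccount not ready after %s", timeout)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        err := waitForDefaultSA(
            "/var/lib/minikube/binaries/v1.28.4/kubectl",
            "/var/lib/minikube/kubeconfig",
            2*time.Minute,
        )
        if err != nil {
            log.Fatal(err)
        }
        log.Println("default ServiceAccount is present")
    }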
	I1218 22:38:17.737892    8082 settings.go:142] acquiring lock: {Name:mkea14aac8a39c6a2ed200653e9b07ad1584eac7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:38:17.737977    8082 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17822-2192/kubeconfig
	I1218 22:38:17.738408    8082 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-2192/kubeconfig: {Name:mkf844e795bd9b2be73b36e3c1c24ce0924bf634 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:38:17.738666    8082 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1218 22:38:17.738914    8082 config.go:182] Loaded profile config "addons-277112": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 22:38:17.738994    8082 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1218 22:38:17.739081    8082 addons.go:69] Setting volumesnapshots=true in profile "addons-277112"
	I1218 22:38:17.739095    8082 addons.go:231] Setting addon volumesnapshots=true in "addons-277112"
	I1218 22:38:17.739126    8082 host.go:66] Checking if "addons-277112" exists ...
	I1218 22:38:17.739548    8082 addons.go:69] Setting ingress-dns=true in profile "addons-277112"
	I1218 22:38:17.739562    8082 addons.go:231] Setting addon ingress-dns=true in "addons-277112"
	I1218 22:38:17.739592    8082 host.go:66] Checking if "addons-277112" exists ...
	I1218 22:38:17.739965    8082 cli_runner.go:164] Run: docker container inspect addons-277112 --format={{.State.Status}}
	I1218 22:38:17.740200    8082 cli_runner.go:164] Run: docker container inspect addons-277112 --format={{.State.Status}}
	I1218 22:38:17.740599    8082 addons.go:69] Setting cloud-spanner=true in profile "addons-277112"
	I1218 22:38:17.740615    8082 addons.go:231] Setting addon cloud-spanner=true in "addons-277112"
	I1218 22:38:17.740643    8082 host.go:66] Checking if "addons-277112" exists ...
	I1218 22:38:17.741020    8082 cli_runner.go:164] Run: docker container inspect addons-277112 --format={{.State.Status}}
	I1218 22:38:17.742965    8082 addons.go:69] Setting inspektor-gadget=true in profile "addons-277112"
	I1218 22:38:17.742989    8082 addons.go:231] Setting addon inspektor-gadget=true in "addons-277112"
	I1218 22:38:17.743019    8082 host.go:66] Checking if "addons-277112" exists ...
	I1218 22:38:17.743388    8082 cli_runner.go:164] Run: docker container inspect addons-277112 --format={{.State.Status}}
	I1218 22:38:17.743744    8082 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-277112"
	I1218 22:38:17.743780    8082 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-277112"
	I1218 22:38:17.743818    8082 host.go:66] Checking if "addons-277112" exists ...
	I1218 22:38:17.744195    8082 cli_runner.go:164] Run: docker container inspect addons-277112 --format={{.State.Status}}
	I1218 22:38:17.751017    8082 addons.go:69] Setting metrics-server=true in profile "addons-277112"
	I1218 22:38:17.751039    8082 addons.go:231] Setting addon metrics-server=true in "addons-277112"
	I1218 22:38:17.751084    8082 host.go:66] Checking if "addons-277112" exists ...
	I1218 22:38:17.751570    8082 cli_runner.go:164] Run: docker container inspect addons-277112 --format={{.State.Status}}
	I1218 22:38:17.764346    8082 addons.go:69] Setting default-storageclass=true in profile "addons-277112"
	I1218 22:38:17.764426    8082 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-277112"
	I1218 22:38:17.764779    8082 cli_runner.go:164] Run: docker container inspect addons-277112 --format={{.State.Status}}
	I1218 22:38:17.766291    8082 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-277112"
	I1218 22:38:17.766315    8082 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-277112"
	I1218 22:38:17.766355    8082 host.go:66] Checking if "addons-277112" exists ...
	I1218 22:38:17.766741    8082 cli_runner.go:164] Run: docker container inspect addons-277112 --format={{.State.Status}}
	I1218 22:38:17.772884    8082 addons.go:69] Setting gcp-auth=true in profile "addons-277112"
	I1218 22:38:17.772908    8082 mustload.go:65] Loading cluster: addons-277112
	I1218 22:38:17.773043    8082 addons.go:69] Setting registry=true in profile "addons-277112"
	I1218 22:38:17.773061    8082 addons.go:231] Setting addon registry=true in "addons-277112"
	I1218 22:38:17.773079    8082 config.go:182] Loaded profile config "addons-277112": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 22:38:17.773109    8082 host.go:66] Checking if "addons-277112" exists ...
	I1218 22:38:17.773295    8082 cli_runner.go:164] Run: docker container inspect addons-277112 --format={{.State.Status}}
	I1218 22:38:17.773502    8082 cli_runner.go:164] Run: docker container inspect addons-277112 --format={{.State.Status}}
	I1218 22:38:17.773628    8082 addons.go:69] Setting storage-provisioner=true in profile "addons-277112"
	I1218 22:38:17.773646    8082 addons.go:231] Setting addon storage-provisioner=true in "addons-277112"
	I1218 22:38:17.773673    8082 host.go:66] Checking if "addons-277112" exists ...
	I1218 22:38:17.790469    8082 addons.go:69] Setting ingress=true in profile "addons-277112"
	I1218 22:38:17.790508    8082 addons.go:231] Setting addon ingress=true in "addons-277112"
	I1218 22:38:17.790564    8082 host.go:66] Checking if "addons-277112" exists ...
	I1218 22:38:17.790999    8082 cli_runner.go:164] Run: docker container inspect addons-277112 --format={{.State.Status}}
	I1218 22:38:17.791984    8082 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-277112"
	I1218 22:38:17.792011    8082 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-277112"
	I1218 22:38:17.794924    8082 cli_runner.go:164] Run: docker container inspect addons-277112 --format={{.State.Status}}
	I1218 22:38:17.857147    8082 cli_runner.go:164] Run: docker container inspect addons-277112 --format={{.State.Status}}
	I1218 22:38:17.954187    8082 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1218 22:38:17.963285    8082 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1218 22:38:17.963306    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1218 22:38:17.963416    8082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-277112
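[Editor's note] The repeated `docker container inspect -f` calls above use Docker's Go-template support to pull the host port mapped to the container's SSH port (22/tcp). The same query issued from Go via os/exec, with the container name from this run:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // The -f template indexes NetworkSettings.Ports["22/tcp"][0].HostPort,
        // exactly as in the inspect lines above.
        format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", format, "addons-277112").Output()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("SSH host port:", strings.TrimSpace(string(out)))
    }

In this run the answer is 32772, which is the port every subsequent sshutil.go client dials.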
	I1218 22:38:17.973160    8082 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1218 22:38:17.984426    8082 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1218 22:38:17.988327    8082 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1218 22:38:17.992496    8082 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1218 22:38:17.996916    8082 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1218 22:38:18.020340    8082 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1218 22:38:18.022442    8082 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1218 22:38:18.024121    8082 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1218 22:38:18.031051    8082 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1218 22:38:18.039299    8082 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1218 22:38:18.039318    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1218 22:38:18.039369    8082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-277112
	I1218 22:38:18.057614    8082 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-277112"
	I1218 22:38:18.057656    8082 host.go:66] Checking if "addons-277112" exists ...
	I1218 22:38:18.058114    8082 cli_runner.go:164] Run: docker container inspect addons-277112 --format={{.State.Status}}
	I1218 22:38:18.039155    8082 addons.go:231] Setting addon default-storageclass=true in "addons-277112"
	I1218 22:38:18.061550    8082 host.go:66] Checking if "addons-277112" exists ...
	I1218 22:38:18.062050    8082 cli_runner.go:164] Run: docker container inspect addons-277112 --format={{.State.Status}}
	I1218 22:38:18.076407    8082 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1218 22:38:18.001160    8082 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1218 22:38:18.001145    8082 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1218 22:38:18.039213    8082 host.go:66] Checking if "addons-277112" exists ...
	I1218 22:38:18.001154    8082 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1218 22:38:18.084402    8082 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1218 22:38:18.084358    8082 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1218 22:38:18.085813    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1218 22:38:18.085823    8082 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1218 22:38:18.092275    8082 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1218 22:38:18.092287    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1218 22:38:18.096408    8082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-277112
	I1218 22:38:18.096419    8082 out.go:177]   - Using image docker.io/registry:2.8.3
	I1218 22:38:18.106884    8082 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1218 22:38:18.106908    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1218 22:38:18.106987    8082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-277112
	I1218 22:38:18.098878    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1218 22:38:18.115158    8082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-277112
	I1218 22:38:18.098945    8082 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1218 22:38:18.123353    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1218 22:38:18.123425    8082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-277112
	I1218 22:38:18.142506    8082 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1218 22:38:18.144520    8082 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1218 22:38:18.099202    8082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-277112
	I1218 22:38:18.149661    8082 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 22:38:18.159048    8082 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 22:38:18.159075    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1218 22:38:18.159141    8082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-277112
	I1218 22:38:18.177937    8082 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1218 22:38:18.177956    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1218 22:38:18.178014    8082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-277112
	I1218 22:38:18.150012    8082 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1218 22:38:18.181457    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1218 22:38:18.181529    8082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-277112
	I1218 22:38:18.196296    8082 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1218 22:38:18.202295    8082 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1218 22:38:18.202317    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1218 22:38:18.202376    8082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-277112
	I1218 22:38:18.204568    8082 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1218 22:38:18.204873    8082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/addons-277112/id_rsa Username:docker}
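[Editor's note] The sed pipeline launched at 22:38:18.204568 rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway. Reconstructed from the two sed expressions, the edited Corefile gains a `log` directive before `errors` and, immediately before the `forward . /etc/resolv.conf` line, this hosts block:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }

The `fallthrough` directive hands every other name back to the remaining plugins, so only the injected record is served locally; the log confirms the injection at 22:38:21.548178.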
	I1218 22:38:18.260375    8082 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-277112" context rescaled to 1 replicas
	I1218 22:38:18.260504    8082 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
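[Editor's note] The "rescaled to 1 replicas" line just above is minikube trimming the default two-replica CoreDNS Deployment for a single-node cluster. A hedged client-go sketch of the same rescale via the scale subresource (one way to do it; minikube's own kapi.go code path may differ in detail):

    package main

    import (
        "context"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        ctx := context.Background()
        // Read the current scale subresource, then write it back with one replica.
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        scale.Spec.Replicas = 1
        if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
            log.Fatal(err)
        }
        log.Println("coredns scaled to 1 replica")
    }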
	I1218 22:38:18.279960    8082 out.go:177] * Verifying Kubernetes components...
	I1218 22:38:18.285113    8082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 22:38:18.317278    8082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/addons-277112/id_rsa Username:docker}
	I1218 22:38:18.355470    8082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/addons-277112/id_rsa Username:docker}
	I1218 22:38:18.368130    8082 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1218 22:38:18.367375    8082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/addons-277112/id_rsa Username:docker}
	I1218 22:38:18.375344    8082 out.go:177]   - Using image docker.io/busybox:stable
	I1218 22:38:18.375675    8082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/addons-277112/id_rsa Username:docker}
	I1218 22:38:18.378615    8082 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1218 22:38:18.378649    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1218 22:38:18.378731    8082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-277112
	I1218 22:38:18.434009    8082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/addons-277112/id_rsa Username:docker}
	I1218 22:38:18.439462    8082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/addons-277112/id_rsa Username:docker}
	I1218 22:38:18.443629    8082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/addons-277112/id_rsa Username:docker}
	I1218 22:38:18.459903    8082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/addons-277112/id_rsa Username:docker}
	I1218 22:38:18.481972    8082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/addons-277112/id_rsa Username:docker}
	I1218 22:38:18.484167    8082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/addons-277112/id_rsa Username:docker}
	I1218 22:38:18.487566    8082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/addons-277112/id_rsa Username:docker}
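[Editor's note] Every sshutil.go line above dials the same Docker-forwarded port (127.0.0.1:32772) as user docker with the machine's id_rsa. A minimal golang.org/x/crypto/ssh sketch of that client setup; host-key checking is skipped here for brevity, which is defensible only against a loopback-forwarded test VM like this one:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/17822-2192/.minikube/machines/addons-277112/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // loopback test VM only
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:32772", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()
        out, err := session.Output("uname -a")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(string(out))
    }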
	I1218 22:38:19.131583    8082 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1218 22:38:19.195770    8082 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1218 22:38:19.276363    8082 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1218 22:38:19.328977    8082 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 22:38:19.379419    8082 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1218 22:38:19.379490    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1218 22:38:19.417230    8082 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1218 22:38:19.422502    8082 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1218 22:38:19.422565    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1218 22:38:19.463356    8082 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1218 22:38:19.466261    8082 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1218 22:38:19.466310    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1218 22:38:19.470119    8082 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1218 22:38:19.470170    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1218 22:38:19.514950    8082 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1218 22:38:19.515020    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1218 22:38:19.549301    8082 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1218 22:38:19.564480    8082 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1218 22:38:19.564573    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1218 22:38:19.644628    8082 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1218 22:38:19.644689    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1218 22:38:19.674709    8082 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1218 22:38:19.674780    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1218 22:38:19.721249    8082 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1218 22:38:19.721320    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1218 22:38:19.824253    8082 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1218 22:38:19.824312    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1218 22:38:19.887756    8082 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1218 22:38:19.887815    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1218 22:38:20.002365    8082 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1218 22:38:20.002441    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1218 22:38:20.098502    8082 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1218 22:38:20.098524    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1218 22:38:20.118226    8082 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1218 22:38:20.118246    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1218 22:38:20.314746    8082 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1218 22:38:20.332960    8082 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1218 22:38:20.332985    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1218 22:38:20.392462    8082 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1218 22:38:20.392486    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1218 22:38:20.424405    8082 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1218 22:38:20.424429    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1218 22:38:20.576583    8082 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1218 22:38:20.613919    8082 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1218 22:38:20.613947    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1218 22:38:20.628064    8082 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1218 22:38:20.628088    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1218 22:38:20.685925    8082 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1218 22:38:20.685949    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1218 22:38:20.824225    8082 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1218 22:38:20.824249    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1218 22:38:20.827490    8082 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1218 22:38:20.827513    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1218 22:38:20.944562    8082 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1218 22:38:20.944588    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1218 22:38:21.023025    8082 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1218 22:38:21.023051    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1218 22:38:21.041440    8082 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1218 22:38:21.244495    8082 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1218 22:38:21.244521    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I1218 22:38:21.326765    8082 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1218 22:38:21.326791    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1218 22:38:21.481834    8082 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1218 22:38:21.487765    8082 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1218 22:38:21.487786    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1218 22:38:21.547136    8082 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.261947505s)
	I1218 22:38:21.547943    8082 node_ready.go:35] waiting up to 6m0s for node "addons-277112" to be "Ready" ...
	I1218 22:38:21.548156    8082 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.343559143s)
	I1218 22:38:21.548178    8082 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1218 22:38:21.551347    8082 node_ready.go:49] node "addons-277112" has status "Ready":"True"
	I1218 22:38:21.551375    8082 node_ready.go:38] duration metric: took 3.408498ms waiting for node "addons-277112" to be "Ready" ...
	I1218 22:38:21.551385    8082 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 22:38:21.561329    8082 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-cs7xm" in "kube-system" namespace to be "Ready" ...
	I1218 22:38:21.567232    8082 pod_ready.go:92] pod "coredns-5dd5756b68-cs7xm" in "kube-system" namespace has status "Ready":"True"
	I1218 22:38:21.567296    8082 pod_ready.go:81] duration metric: took 5.898217ms waiting for pod "coredns-5dd5756b68-cs7xm" in "kube-system" namespace to be "Ready" ...
	I1218 22:38:21.567322    8082 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fz6mk" in "kube-system" namespace to be "Ready" ...
	I1218 22:38:21.576515    8082 pod_ready.go:92] pod "coredns-5dd5756b68-fz6mk" in "kube-system" namespace has status "Ready":"True"
	I1218 22:38:21.576586    8082 pod_ready.go:81] duration metric: took 9.243289ms waiting for pod "coredns-5dd5756b68-fz6mk" in "kube-system" namespace to be "Ready" ...
	I1218 22:38:21.576612    8082 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-277112" in "kube-system" namespace to be "Ready" ...
	I1218 22:38:21.582327    8082 pod_ready.go:92] pod "etcd-addons-277112" in "kube-system" namespace has status "Ready":"True"
	I1218 22:38:21.582391    8082 pod_ready.go:81] duration metric: took 5.757852ms waiting for pod "etcd-addons-277112" in "kube-system" namespace to be "Ready" ...
	I1218 22:38:21.582416    8082 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-277112" in "kube-system" namespace to be "Ready" ...
	I1218 22:38:21.588255    8082 pod_ready.go:92] pod "kube-apiserver-addons-277112" in "kube-system" namespace has status "Ready":"True"
	I1218 22:38:21.588318    8082 pod_ready.go:81] duration metric: took 5.881881ms waiting for pod "kube-apiserver-addons-277112" in "kube-system" namespace to be "Ready" ...
	I1218 22:38:21.588342    8082 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-277112" in "kube-system" namespace to be "Ready" ...
	I1218 22:38:21.821369    8082 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1218 22:38:21.951188    8082 pod_ready.go:92] pod "kube-controller-manager-addons-277112" in "kube-system" namespace has status "Ready":"True"
	I1218 22:38:21.951213    8082 pod_ready.go:81] duration metric: took 362.85022ms waiting for pod "kube-controller-manager-addons-277112" in "kube-system" namespace to be "Ready" ...
	I1218 22:38:21.951225    8082 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kqjq4" in "kube-system" namespace to be "Ready" ...
	I1218 22:38:22.351445    8082 pod_ready.go:92] pod "kube-proxy-kqjq4" in "kube-system" namespace has status "Ready":"True"
	I1218 22:38:22.351468    8082 pod_ready.go:81] duration metric: took 400.234716ms waiting for pod "kube-proxy-kqjq4" in "kube-system" namespace to be "Ready" ...
	I1218 22:38:22.351480    8082 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-277112" in "kube-system" namespace to be "Ready" ...
	I1218 22:38:22.736168    8082 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.604504133s)
	I1218 22:38:22.736223    8082 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.54039804s)
	I1218 22:38:22.736251    8082 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.459821773s)
	I1218 22:38:22.751670    8082 pod_ready.go:92] pod "kube-scheduler-addons-277112" in "kube-system" namespace has status "Ready":"True"
	I1218 22:38:22.751694    8082 pod_ready.go:81] duration metric: took 400.206802ms waiting for pod "kube-scheduler-addons-277112" in "kube-system" namespace to be "Ready" ...
	I1218 22:38:22.751705    8082 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-pw89s" in "kube-system" namespace to be "Ready" ...
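[Editor's note] The node_ready/pod_ready waits above all reduce to reading status conditions: a node or pod is "Ready" when its Ready condition reports status True. A client-go sketch of that check for one of the pods named in the log:

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True,
    // the same predicate pod_ready.go is logging above.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-addons-277112", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("Ready:", isPodReady(pod))
    }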
	I1218 22:38:23.794355    8082 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.465290084s)
	I1218 22:38:24.705328    8082 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1218 22:38:24.705433    8082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-277112
	I1218 22:38:24.742218    8082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/addons-277112/id_rsa Username:docker}
	I1218 22:38:24.855234    8082 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-pw89s" in "kube-system" namespace has status "Ready":"False"
	I1218 22:38:25.501587    8082 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1218 22:38:25.732310    8082 addons.go:231] Setting addon gcp-auth=true in "addons-277112"
	I1218 22:38:25.732357    8082 host.go:66] Checking if "addons-277112" exists ...
	I1218 22:38:25.732820    8082 cli_runner.go:164] Run: docker container inspect addons-277112 --format={{.State.Status}}
	I1218 22:38:25.759307    8082 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1218 22:38:25.759357    8082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-277112
	I1218 22:38:25.805631    8082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/addons-277112/id_rsa Username:docker}
	I1218 22:38:26.957119    8082 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.53980497s)
	I1218 22:38:26.957196    8082 addons.go:467] Verifying addon ingress=true in "addons-277112"
	I1218 22:38:26.957248    8082 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.642478328s)
	I1218 22:38:26.957270    8082 addons.go:467] Verifying addon registry=true in "addons-277112"
	I1218 22:38:26.959719    8082 out.go:177] * Verifying registry addon...
	I1218 22:38:26.957210    8082 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.407850344s)
	I1218 22:38:26.957161    8082 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.493713238s)
	I1218 22:38:26.957613    8082 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.381001231s)
	I1218 22:38:26.957701    8082 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.916230322s)
	I1218 22:38:26.957754    8082 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.47589044s)
	I1218 22:38:26.962004    8082 out.go:177] * Verifying ingress addon...
	W1218 22:38:26.962285    8082 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1218 22:38:26.962298    8082 addons.go:467] Verifying addon metrics-server=true in "addons-277112"
	I1218 22:38:26.965452    8082 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1218 22:38:26.968119    8082 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1218 22:38:26.965529    8082 retry.go:31] will retry after 264.445294ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
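[Editor's note] The failure above is the classic CRD-before-CR ordering problem: the VolumeSnapshotClass object in csi-hostpath-snapshotclass.yaml cannot be mapped until the volumesnapshotclasses CRD created in the same apply batch is established, so retry.go re-runs the whole apply after a short backoff (and at 22:38:27 the reapply uses kubectl apply --force). A generic stdlib retry helper in that spirit; the names and backoff policy are ours, not minikube's:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryWithBackoff re-runs fn until it succeeds or attempts are
    // exhausted, doubling the delay each round.
    func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
        delay := initial
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            time.Sleep(delay)
            delay *= 2
        }
        return fmt.Errorf("after %d attempts: %w", attempts, err)
    }

    func main() {
        calls := 0
        err := retryWithBackoff(5, 250*time.Millisecond, func() error {
            calls++
            if calls < 3 {
                // Simulates the mapping error until the CRD is established.
                return errors.New(`no matches for kind "VolumeSnapshotClass"`)
            }
            return nil
        })
        fmt.Println(calls, err)
    }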
	I1218 22:38:26.984958    8082 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1218 22:38:26.984981    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:26.999297    8082 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1218 22:38:26.999360    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1218 22:38:27.005572    8082 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1218 22:38:27.233425    8082 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1218 22:38:27.257960    8082 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-pw89s" in "kube-system" namespace has status "Ready":"False"
	I1218 22:38:27.470863    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:27.473052    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:27.973013    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:27.975219    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:28.471651    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:28.475932    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:28.885935    8082 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.126597625s)
	I1218 22:38:28.890531    8082 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1218 22:38:28.886652    8082 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.065241515s)
	I1218 22:38:28.890633    8082 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-277112"
	I1218 22:38:28.892780    8082 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1218 22:38:28.894799    8082 out.go:177] * Verifying csi-hostpath-driver addon...
	I1218 22:38:28.897632    8082 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1218 22:38:28.894959    8082 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1218 22:38:28.897825    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1218 22:38:28.903145    8082 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1218 22:38:28.903191    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:28.963022    8082 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1218 22:38:28.963083    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1218 22:38:28.970919    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:28.974995    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:29.004613    8082 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1218 22:38:29.004681    8082 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1218 22:38:29.068391    8082 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1218 22:38:29.258439    8082 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-pw89s" in "kube-system" namespace has status "Ready":"False"
	I1218 22:38:29.403477    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:29.472213    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:29.474920    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:29.635131    8082 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.401623702s)
	I1218 22:38:29.904440    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:29.971196    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:29.975196    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:30.417926    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:30.446122    8082 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.37763649s)
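	(The gcp-auth apply that just completed boils down to shelling out to the node's bundled kubectl with an explicit KUBECONFIG, which is what ssh_runner executes over SSH inside the node container. A minimal Go sketch of the equivalent invocation, using the exact binary and manifest paths from the log; running it locally rather than over SSH is an assumption for illustration:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Same command as the ssh_runner.go:195 line above, minus sudo/SSH.
		cmd := exec.Command("/var/lib/minikube/binaries/v1.28.4/kubectl",
			"apply",
			"-f", "/etc/kubernetes/addons/gcp-auth-ns.yaml",
			"-f", "/etc/kubernetes/addons/gcp-auth-service.yaml",
			"-f", "/etc/kubernetes/addons/gcp-auth-webhook.yaml",
		)
		// Point kubectl at the in-node kubeconfig, as in the logged command line.
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}

	The 1.37s "Completed" duration above is simply the wall-clock time of this command as measured by ssh_runner.)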
	I1218 22:38:30.448862    8082 addons.go:467] Verifying addon gcp-auth=true in "addons-277112"
	I1218 22:38:30.451002    8082 out.go:177] * Verifying gcp-auth addon...
	I1218 22:38:30.454162    8082 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1218 22:38:30.460493    8082 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1218 22:38:30.460520    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:30.470311    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:30.473152    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:30.904116    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:30.957617    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:30.970415    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:30.973751    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:31.403804    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:31.458925    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:31.480359    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:31.481225    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:31.757786    8082 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-pw89s" in "kube-system" namespace has status "Ready":"False"
	I1218 22:38:31.903028    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:31.958118    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:31.971282    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:31.973466    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:32.259505    8082 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-pw89s" in "kube-system" namespace has status "Ready":"True"
	I1218 22:38:32.259528    8082 pod_ready.go:81] duration metric: took 9.507815458s waiting for pod "nvidia-device-plugin-daemonset-pw89s" in "kube-system" namespace to be "Ready" ...
	I1218 22:38:32.259538    8082 pod_ready.go:38] duration metric: took 10.708142483s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 22:38:32.259555    8082 api_server.go:52] waiting for apiserver process to appear ...
	I1218 22:38:32.259610    8082 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 22:38:32.276355    8082 api_server.go:72] duration metric: took 14.015742983s to wait for apiserver process to appear ...
	I1218 22:38:32.276387    8082 api_server.go:88] waiting for apiserver healthz status ...
	I1218 22:38:32.276406    8082 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1218 22:38:32.285065    8082 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
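	(The healthz check logged above is a plain HTTPS GET against the apiserver's /healthz endpoint, succeeding on a 200 with body "ok". A minimal sketch, assuming the endpoint address from the log and skipping certificate verification for brevity; minikube instead authenticates with the cluster's client certificates:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			// Demo only: the real check trusts the cluster CA rather than skipping verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect: 200: ok
	})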
	I1218 22:38:32.286485    8082 api_server.go:141] control plane version: v1.28.4
	I1218 22:38:32.286542    8082 api_server.go:131] duration metric: took 10.148053ms to wait for apiserver health ...
	I1218 22:38:32.286565    8082 system_pods.go:43] waiting for kube-system pods to appear ...
	I1218 22:38:32.298588    8082 system_pods.go:59] 17 kube-system pods found
	I1218 22:38:32.298644    8082 system_pods.go:61] "coredns-5dd5756b68-cs7xm" [8a4a5a89-fc15-4e2c-9a69-c71fbd426468] Running
	I1218 22:38:32.298667    8082 system_pods.go:61] "csi-hostpath-attacher-0" [77476268-a6ed-40cd-8667-a4dac26063a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1218 22:38:32.298727    8082 system_pods.go:61] "csi-hostpath-resizer-0" [9fe18eda-92b4-4088-9884-74e79b59333b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1218 22:38:32.298754    8082 system_pods.go:61] "csi-hostpathplugin-p9f77" [5ced9ed1-b5a5-4d4f-97e6-da7a194508a4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1218 22:38:32.298776    8082 system_pods.go:61] "etcd-addons-277112" [75a25d71-e09c-4f6e-9e46-5ca8e84c8385] Running
	I1218 22:38:32.298797    8082 system_pods.go:61] "kube-apiserver-addons-277112" [3670ec6f-b6e6-4b13-acdf-d5631cdb52f6] Running
	I1218 22:38:32.298826    8082 system_pods.go:61] "kube-controller-manager-addons-277112" [7ec989b0-1568-49ce-bac4-8d1cd0318219] Running
	I1218 22:38:32.298850    8082 system_pods.go:61] "kube-ingress-dns-minikube" [cde7c745-7e65-4f54-bef3-7f3b535d6477] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1218 22:38:32.298872    8082 system_pods.go:61] "kube-proxy-kqjq4" [e71604c6-78a2-4e43-bf88-10144900eb8e] Running
	I1218 22:38:32.298893    8082 system_pods.go:61] "kube-scheduler-addons-277112" [684dae09-044a-4f0b-a9fe-caf75e448237] Running
	I1218 22:38:32.298927    8082 system_pods.go:61] "metrics-server-7c66d45ddc-2sqdc" [7804eeb4-a7ff-4d3b-926f-08929bbf85cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1218 22:38:32.298952    8082 system_pods.go:61] "nvidia-device-plugin-daemonset-pw89s" [59ffba20-0b33-407f-bda7-71147794901d] Running
	I1218 22:38:32.298976    8082 system_pods.go:61] "registry-2jt27" [fde55396-40c0-4d55-b4b6-aea03fafe5c2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1218 22:38:32.298998    8082 system_pods.go:61] "registry-proxy-xqcw7" [2e273129-0475-446b-8f43-0a9765d21350] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1218 22:38:32.299034    8082 system_pods.go:61] "snapshot-controller-58dbcc7b99-5qtxt" [ec600ed4-b77e-4eb9-a9d2-8c09ea3d8e9e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1218 22:38:32.299061    8082 system_pods.go:61] "snapshot-controller-58dbcc7b99-tnx2p" [8736fb7a-85a3-4c90-84b5-fb8f643a6a3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1218 22:38:32.299095    8082 system_pods.go:61] "storage-provisioner" [02837d34-98b6-4bab-bd7a-b8a8242a5620] Running
	I1218 22:38:32.299118    8082 system_pods.go:74] duration metric: took 12.53534ms to wait for pod list to return data ...
	I1218 22:38:32.299140    8082 default_sa.go:34] waiting for default service account to be created ...
	I1218 22:38:32.301593    8082 default_sa.go:45] found service account: "default"
	I1218 22:38:32.301609    8082 default_sa.go:55] duration metric: took 2.451452ms for default service account to be created ...
	I1218 22:38:32.301616    8082 system_pods.go:116] waiting for k8s-apps to be running ...
	I1218 22:38:32.312164    8082 system_pods.go:86] 17 kube-system pods found
	I1218 22:38:32.312214    8082 system_pods.go:89] "coredns-5dd5756b68-cs7xm" [8a4a5a89-fc15-4e2c-9a69-c71fbd426468] Running
	I1218 22:38:32.312238    8082 system_pods.go:89] "csi-hostpath-attacher-0" [77476268-a6ed-40cd-8667-a4dac26063a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1218 22:38:32.312263    8082 system_pods.go:89] "csi-hostpath-resizer-0" [9fe18eda-92b4-4088-9884-74e79b59333b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1218 22:38:32.312299    8082 system_pods.go:89] "csi-hostpathplugin-p9f77" [5ced9ed1-b5a5-4d4f-97e6-da7a194508a4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1218 22:38:32.312327    8082 system_pods.go:89] "etcd-addons-277112" [75a25d71-e09c-4f6e-9e46-5ca8e84c8385] Running
	I1218 22:38:32.312347    8082 system_pods.go:89] "kube-apiserver-addons-277112" [3670ec6f-b6e6-4b13-acdf-d5631cdb52f6] Running
	I1218 22:38:32.312424    8082 system_pods.go:89] "kube-controller-manager-addons-277112" [7ec989b0-1568-49ce-bac4-8d1cd0318219] Running
	I1218 22:38:32.312454    8082 system_pods.go:89] "kube-ingress-dns-minikube" [cde7c745-7e65-4f54-bef3-7f3b535d6477] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1218 22:38:32.312473    8082 system_pods.go:89] "kube-proxy-kqjq4" [e71604c6-78a2-4e43-bf88-10144900eb8e] Running
	I1218 22:38:32.312496    8082 system_pods.go:89] "kube-scheduler-addons-277112" [684dae09-044a-4f0b-a9fe-caf75e448237] Running
	I1218 22:38:32.312546    8082 system_pods.go:89] "metrics-server-7c66d45ddc-2sqdc" [7804eeb4-a7ff-4d3b-926f-08929bbf85cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1218 22:38:32.312598    8082 system_pods.go:89] "nvidia-device-plugin-daemonset-pw89s" [59ffba20-0b33-407f-bda7-71147794901d] Running
	I1218 22:38:32.312622    8082 system_pods.go:89] "registry-2jt27" [fde55396-40c0-4d55-b4b6-aea03fafe5c2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1218 22:38:32.312646    8082 system_pods.go:89] "registry-proxy-xqcw7" [2e273129-0475-446b-8f43-0a9765d21350] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1218 22:38:32.312679    8082 system_pods.go:89] "snapshot-controller-58dbcc7b99-5qtxt" [ec600ed4-b77e-4eb9-a9d2-8c09ea3d8e9e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1218 22:38:32.312705    8082 system_pods.go:89] "snapshot-controller-58dbcc7b99-tnx2p" [8736fb7a-85a3-4c90-84b5-fb8f643a6a3e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1218 22:38:32.312725    8082 system_pods.go:89] "storage-provisioner" [02837d34-98b6-4bab-bd7a-b8a8242a5620] Running
	I1218 22:38:32.312745    8082 system_pods.go:126] duration metric: took 11.122996ms to wait for k8s-apps to be running ...
	I1218 22:38:32.312777    8082 system_svc.go:44] waiting for kubelet service to be running ....
	I1218 22:38:32.312846    8082 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 22:38:32.328269    8082 system_svc.go:56] duration metric: took 15.496539ms WaitForService to wait for kubelet.
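	(The kubelet service check above relies on systemctl's exit code rather than its output. A rough local sketch of the same probe; the logged command runs under sudo over SSH inside the node and spells the unit as "service kubelet", which is a minikube invocation detail:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// `is-active --quiet` prints nothing and exits 0 only when the unit is active.
		if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
			fmt.Println("kubelet is not active:", err)
			return
		}
		fmt.Println("kubelet is active")
	})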
	I1218 22:38:32.328332    8082 kubeadm.go:581] duration metric: took 14.067725648s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1218 22:38:32.328382    8082 node_conditions.go:102] verifying NodePressure condition ...
	I1218 22:38:32.331621    8082 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1218 22:38:32.331680    8082 node_conditions.go:123] node cpu capacity is 2
	I1218 22:38:32.331704    8082 node_conditions.go:105] duration metric: took 3.297941ms to run NodePressure ...
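	(The NodePressure figures above, 203034800Ki of ephemeral storage and 2 CPUs, come from the node's status.capacity fields. A sketch of reading them with client-go, assuming a reachable kubeconfig at the default location; the resource names are the standard core/v1 constants:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		}
	})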
	I1218 22:38:32.331750    8082 start.go:228] waiting for startup goroutines ...
	I1218 22:38:32.403852    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:32.458231    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:32.472017    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:32.473862    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:32.904041    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:32.958891    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:32.971804    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:32.973919    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:33.404686    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:33.457786    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:33.471140    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:33.474999    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:33.903088    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:33.958487    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:33.971079    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:33.973356    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:34.403006    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:34.458779    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:34.471831    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:34.474701    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:34.903197    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:34.958106    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:34.969933    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:34.973221    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:35.404110    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:35.458667    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:35.470520    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:35.473433    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:35.903386    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:35.958585    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:35.981787    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:35.982867    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:36.403132    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:36.458031    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:36.471842    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:36.473273    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:36.903248    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:36.958463    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:36.971557    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:36.973710    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:37.403179    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:37.458341    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:37.470240    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:37.472757    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:37.903259    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:37.959049    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:37.972319    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:37.975346    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:38.403791    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:38.458333    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:38.471975    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:38.474525    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:38.903434    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:38.957949    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:38.970039    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:38.972949    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:39.403187    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:39.457924    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:39.471067    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:39.472685    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:39.903254    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:39.957722    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:39.969966    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:39.974202    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:40.402741    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:40.458179    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:40.470070    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:40.473595    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:40.902816    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:40.958335    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:40.972313    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:40.973339    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:41.403120    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:41.458369    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:41.472667    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:41.473580    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:41.903368    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:41.958171    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:41.970331    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:41.972943    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:42.403301    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:42.458924    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:42.472307    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:42.474823    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:42.903820    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:42.958019    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:42.971827    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:42.973999    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:43.405933    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:43.461555    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:43.471537    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:43.474202    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:43.903746    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:43.958345    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:43.980430    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:43.985359    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:44.403521    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:44.457966    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:44.480751    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:44.481509    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:44.907692    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:44.958555    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:44.972991    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:44.974695    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:45.402970    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:45.458823    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:45.472141    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:45.473785    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:45.903872    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:45.959239    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:45.971603    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:45.974485    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:46.403886    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:46.457958    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:46.472601    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:46.472964    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:46.903862    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:46.958716    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:46.972331    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:46.975416    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:47.403861    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:47.458174    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:47.471006    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:47.472867    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:47.903282    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:47.957706    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:47.971616    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:47.974285    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:48.408047    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:48.458543    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:48.470429    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:48.473093    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:48.903286    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:48.958257    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:48.971120    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:48.973900    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:49.404363    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:49.458127    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:49.471934    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1218 22:38:49.474904    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:49.903988    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:49.958760    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:49.971550    8082 kapi.go:107] duration metric: took 23.006084662s to wait for kubernetes.io/minikube-addons=registry ...
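	(The registry wait just completed after 23s of the kapi.go:96 polling visible throughout this log: list pods by label selector, check their phase, sleep, repeat on a roughly 500ms cadence. A hypothetical sketch of that loop shape with client-go; the exact kapi implementation is an assumption inferred from the log lines, and the namespace/selector below are taken from them:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPods polls until every pod matching selector reports phase Running.
	func waitForPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 && allRunning(pods.Items) {
				return nil
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
		}
		return fmt.Errorf("timed out waiting for %q", selector)
	}

	func allRunning(pods []corev1.Pod) bool {
		for _, p := range pods {
			if p.Status.Phase != corev1.PodRunning {
				return false
			}
		}
		return true
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForPods(cs, "kube-system",
			"kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("registry pods running")
	}

	The same loop explains the remaining ingress-nginx, csi-hostpath-driver, and gcp-auth waits below, each terminated by a kapi.go:107 duration line.)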
	I1218 22:38:49.973262    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:50.404017    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:50.458434    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:50.473201    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:50.903928    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:50.957407    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:50.973145    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:51.404456    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:51.458129    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:51.472612    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:51.903204    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:51.957735    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:51.973469    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:52.403419    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:52.458182    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:52.472617    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:52.904067    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:52.958930    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:52.973195    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:53.407525    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:53.458642    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:53.489236    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:53.903958    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:53.958614    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:53.972877    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:54.404009    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:54.458508    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:54.472610    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:54.903213    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:54.958121    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:54.972525    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:55.403540    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:55.457777    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:55.473026    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:55.903834    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:55.959193    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:55.977842    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:56.403377    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:56.458199    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:56.472780    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:56.903070    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:56.958605    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:56.972628    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:57.403094    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:57.457852    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:57.473198    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:57.909725    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:57.958764    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:57.973882    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:58.403470    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:58.463101    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:58.478580    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:58.903785    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:58.958191    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:58.973817    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:59.404198    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:59.458570    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:59.473090    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:38:59.903563    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:38:59.957766    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:38:59.972511    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:39:00.405690    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:00.458353    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:00.473811    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:39:00.904048    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:00.957403    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:00.973129    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:39:01.404007    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:01.461902    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:01.473079    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:39:01.905859    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:01.960286    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:01.972421    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:39:02.403670    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:02.458028    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:02.473110    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:39:02.908353    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:02.959102    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:02.973689    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:39:03.403638    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:03.458528    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:03.472883    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:39:03.903715    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:03.958679    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:03.977391    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:39:04.403245    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:04.461541    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:04.480320    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:39:04.902709    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:04.958784    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:04.973382    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:39:05.403432    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:05.460784    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:05.474855    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:39:05.903208    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:05.957564    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:05.972652    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:39:06.403968    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:06.457821    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:06.473279    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:39:06.902958    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:06.958857    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:06.972907    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:39:07.403981    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:07.458005    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:07.472975    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:39:07.904432    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:07.958153    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:07.972941    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:39:08.404203    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:08.457761    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:08.473273    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:39:08.903946    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:08.957758    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:08.973093    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:39:09.403405    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:09.460314    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:09.472938    8082 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1218 22:39:09.904358    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:09.958369    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:09.973300    8082 kapi.go:107] duration metric: took 43.005177366s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1218 22:39:10.404048    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:10.457720    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:10.902710    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:10.958167    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:11.403492    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:11.457894    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:11.903772    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:11.958598    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:12.403734    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:12.458097    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:12.904448    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:12.958116    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:13.404625    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:13.458674    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:13.903641    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:13.958117    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:14.403405    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:14.458431    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:14.903567    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:14.958220    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:15.410799    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:15.457834    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:15.904698    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:15.957973    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:16.403551    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:16.458017    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:16.903636    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:16.958493    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:17.412376    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:17.459559    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:17.903974    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1218 22:39:17.960493    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:18.404224    8082 kapi.go:107] duration metric: took 49.506585845s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1218 22:39:18.458109    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:18.957905    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:19.458118    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:19.958467    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:20.458156    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:20.957669    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:21.458290    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:21.958098    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:22.458531    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:22.957420    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:23.458329    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:23.958200    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:24.458129    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:24.957784    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:25.457921    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:25.958486    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:26.458685    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:26.957823    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:27.457997    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:27.958172    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:28.458002    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:28.957508    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:29.458309    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:29.958070    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:30.458216    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:30.958032    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:31.457664    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:31.957313    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:32.458043    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:32.958520    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:33.458452    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:33.958512    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:34.458294    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:34.958256    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:35.458722    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:35.958258    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:36.458016    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:36.957776    8082 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1218 22:39:37.468749    8082 kapi.go:107] duration metric: took 1m7.014581223s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1218 22:39:37.471207    8082 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-277112 cluster.
	I1218 22:39:37.473534    8082 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1218 22:39:37.475472    8082 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1218 22:39:37.477856    8082 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, cloud-spanner, storage-provisioner, inspektor-gadget, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1218 22:39:37.481707    8082 addons.go:502] enable addons completed in 1m19.742706366s: enabled=[ingress-dns nvidia-device-plugin cloud-spanner storage-provisioner inspektor-gadget metrics-server storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1218 22:39:37.481758    8082 start.go:233] waiting for cluster config update ...
	I1218 22:39:37.481776    8082 start.go:242] writing updated cluster config ...
	I1218 22:39:37.482032    8082 ssh_runner.go:195] Run: rm -f paused
	I1218 22:39:37.827879    8082 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I1218 22:39:37.830724    8082 out.go:177] * Done! kubectl is now configured to use "addons-277112" cluster and "default" namespace by default
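	
	For reference, the `gcp-auth-skip-secret` label mentioned in the gcp-auth output above is an ordinary pod label set in the pod configuration before the pod is created; a minimal sketch (the pod name and container here are illustrative, not taken from this run):
	
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds                  # hypothetical name, for illustration only
	      labels:
	        gcp-auth-skip-secret: "true"      # asks the gcp-auth webhook not to mount credentials into this pod
	    spec:
	      containers:
	      - name: app
	        image: gcr.io/google-samples/hello-app:1.0   # same sample image pulled later in this log
	
	To mount credentials into pods that already existed when the addon was enabled, the message above points to rerunning the enable command with `--refresh`, i.e. `minikube addons enable gcp-auth --refresh`.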
	
	* 
	* ==> Docker <==
	* Dec 18 22:40:22 addons-277112 dockerd[1102]: time="2023-12-18T22:40:22.943909823Z" level=info msg="ignoring event" container=45d9ec8e90c7544e775369586398a44f771a5eea801c00e53d638afa3d69af12 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 22:40:22 addons-277112 dockerd[1102]: time="2023-12-18T22:40:22.995356361Z" level=info msg="ignoring event" container=8cd04c14337fed5223d35c23c27e6772328e20cbaa9bf506d854419056bc356a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 22:40:23 addons-277112 cri-dockerd[1312]: time="2023-12-18T22:40:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e2c010ab20477cd60627f259a80abd0348f38cfb97b297966299a847758bd56f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Dec 18 22:40:25 addons-277112 cri-dockerd[1312]: time="2023-12-18T22:40:25Z" level=info msg="Stop pulling image gcr.io/google-samples/hello-app:1.0: Status: Downloaded newer image for gcr.io/google-samples/hello-app:1.0"
	Dec 18 22:40:25 addons-277112 dockerd[1102]: time="2023-12-18T22:40:25.378752722Z" level=info msg="ignoring event" container=bc68ea2b7203584976c5e6b3eb06dec02dea6111033a657c6244c751f0bbf9bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 22:40:26 addons-277112 dockerd[1102]: time="2023-12-18T22:40:26.060621790Z" level=info msg="ignoring event" container=1edfa572acf1303fdb1c7af64aabaddead70db40bfc5d7ce050e491502b9af6d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 22:40:29 addons-277112 dockerd[1102]: time="2023-12-18T22:40:29.413800715Z" level=info msg="ignoring event" container=2ea51b7dd9e859f2653ba2fdc6dee55914079dae864f0bf657ff6ac3403abbb1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 22:40:29 addons-277112 dockerd[1102]: time="2023-12-18T22:40:29.530877424Z" level=info msg="ignoring event" container=ea20e6741447d1a3b1925b17b89cfbca6a45604395fd393a2bc1c0d9383233d9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 22:40:30 addons-277112 cri-dockerd[1312]: time="2023-12-18T22:40:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eae24145ac276da11e3d0232c139e997227dcd2435e91f0d5c164e11c941149b/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Dec 18 22:40:30 addons-277112 dockerd[1102]: time="2023-12-18T22:40:30.332242983Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 18 22:40:30 addons-277112 cri-dockerd[1312]: time="2023-12-18T22:40:30Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Status: Downloaded newer image for busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 18 22:40:31 addons-277112 dockerd[1102]: time="2023-12-18T22:40:31.031992923Z" level=info msg="ignoring event" container=1fb80add24ae8f9383003868d946f17a5424a782fd158941a603586e0b025b1d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 22:40:33 addons-277112 dockerd[1102]: time="2023-12-18T22:40:33.168908201Z" level=info msg="ignoring event" container=eae24145ac276da11e3d0232c139e997227dcd2435e91f0d5c164e11c941149b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 22:40:35 addons-277112 cri-dockerd[1312]: time="2023-12-18T22:40:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/430041cf2bf34f1db63577ddf531ac9455fb4e142a36ea4e730c08152b19c2d8/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Dec 18 22:40:36 addons-277112 cri-dockerd[1312]: time="2023-12-18T22:40:36Z" level=info msg="Stop pulling image busybox:stable: Status: Downloaded newer image for busybox:stable"
	Dec 18 22:40:36 addons-277112 dockerd[1102]: time="2023-12-18T22:40:36.220097792Z" level=info msg="ignoring event" container=c0b0fda42da431adc96223ee65ab1cc26bd6a978fe24598bf041b789479d8560 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 22:40:38 addons-277112 dockerd[1102]: time="2023-12-18T22:40:38.369590123Z" level=info msg="ignoring event" container=430041cf2bf34f1db63577ddf531ac9455fb4e142a36ea4e730c08152b19c2d8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 22:40:39 addons-277112 dockerd[1102]: time="2023-12-18T22:40:39.398084553Z" level=info msg="ignoring event" container=c4cf4a48502e5b32c14e10ad34324d68bab6134bc5327374061f3ad89b367c61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 22:40:40 addons-277112 cri-dockerd[1312]: time="2023-12-18T22:40:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6a6ab44900c3eb1822539523cb68c03e7ce21cd7a1dd4e367a2928de8db1ec56/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Dec 18 22:40:40 addons-277112 dockerd[1102]: time="2023-12-18T22:40:40.546894325Z" level=info msg="ignoring event" container=448332be9c1851bd12a458c13dd448937b8b58e5cb88a06c0364c72ea5e56719 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 22:40:40 addons-277112 dockerd[1102]: time="2023-12-18T22:40:40.911753159Z" level=info msg="ignoring event" container=d1a7cb502412ea72d7e6abea6808a9e31e3d02c039e4e34fb42994423c3dbcf2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 22:40:42 addons-277112 dockerd[1102]: time="2023-12-18T22:40:42.681724933Z" level=info msg="ignoring event" container=6a6ab44900c3eb1822539523cb68c03e7ce21cd7a1dd4e367a2928de8db1ec56 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 22:40:44 addons-277112 dockerd[1102]: time="2023-12-18T22:40:44.110953791Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=098cf56ed5bd66761bd156ff34f9a45935bf7d500cae995ef39b3f319a4de9bd
	Dec 18 22:40:44 addons-277112 dockerd[1102]: time="2023-12-18T22:40:44.184014344Z" level=info msg="ignoring event" container=098cf56ed5bd66761bd156ff34f9a45935bf7d500cae995ef39b3f319a4de9bd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 22:40:44 addons-277112 dockerd[1102]: time="2023-12-18T22:40:44.303115818Z" level=info msg="ignoring event" container=4437e3ee7ee6844e07f288fe3febcfa6605e9c6ae37261ed9ae1f33d900878fb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d1a7cb502412e       dd1b12fcb6097                                                                                                                9 seconds ago        Exited              hello-world-app           2                   e2c010ab20477       hello-world-app-5d77478584-n8s5q
	448332be9c185       fc9db2894f4e4                                                                                                                9 seconds ago        Exited              helper-pod                0                   6a6ab44900c3e       helper-pod-delete-pvc-2c036aaf-9188-4922-a1b6-850c21e22b1b
	c0b0fda42da43       busybox@sha256:5c63a9b46e7139d2d5841462859edcbbf57f238af891b6096578e5894cfe5ae2                                              13 seconds ago       Exited              busybox                   0                   430041cf2bf34       test-local-path
	1fb80add24ae8       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                              19 seconds ago       Exited              helper-pod                0                   eae24145ac276       helper-pod-create-pvc-2c036aaf-9188-4922-a1b6-850c21e22b1b
	2d20f61d2bebb       nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc                                                35 seconds ago       Running             nginx                     0                   f4121d9c069f0       nginx
	e0546f3c9aec1       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                 About a minute ago   Running             gcp-auth                  0                   01b186df093a7       gcp-auth-d4c87556c-g8cff
	4e77dab4686bf       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80   About a minute ago   Exited              patch                     0                   ee3b32f50c99e       ingress-nginx-admission-patch-cpwwn
	29a5f3607c40f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80   About a minute ago   Exited              create                    0                   055b74f5d6308       ingress-nginx-admission-create-f86h7
	100bd0b2956c2       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       About a minute ago   Running             local-path-provisioner    0                   79b784e7acbdd       local-path-provisioner-78b46b4d5c-kff7w
	d423e4ae25edc       gcr.io/cloud-spanner-emulator/emulator@sha256:9ded3fac22d4d1c85ae51473e3876e2377f5179192fea664409db0fe87e05ece               2 minutes ago        Running             cloud-spanner-emulator    0                   3a427a3733e8e       cloud-spanner-emulator-5649c69bf6-jkbsv
	73696f22931f3       ba04bb24b9575                                                                                                                2 minutes ago        Running             storage-provisioner       0                   34632ed691da2       storage-provisioner
	e76b65b26cb2a       97e04611ad434                                                                                                                2 minutes ago        Running             coredns                   0                   611ad63e7f2b7       coredns-5dd5756b68-cs7xm
	418eb7e108fd2       3ca3ca488cf13                                                                                                                2 minutes ago        Running             kube-proxy                0                   fba5e28531a91       kube-proxy-kqjq4
	df5e9951a8c9b       05c284c929889                                                                                                                2 minutes ago        Running             kube-scheduler            0                   ec4bc696656f0       kube-scheduler-addons-277112
	75c7072d77a18       9961cbceaf234                                                                                                                2 minutes ago        Running             kube-controller-manager   0                   70db3e397ef01       kube-controller-manager-addons-277112
	c2f5634744056       04b4c447bb9d4                                                                                                                2 minutes ago        Running             kube-apiserver            0                   afd64d1573b5a       kube-apiserver-addons-277112
	ecba945e0a2c6       9cdd6470f48c8                                                                                                                2 minutes ago        Running             etcd                      0                   777cce270a698       etcd-addons-277112
	
	* 
	* ==> coredns [e76b65b26cb2] <==
	* [INFO] 10.244.0.19:59679 - 22539 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000081888s
	[INFO] 10.244.0.19:41966 - 27020 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002853706s
	[INFO] 10.244.0.19:59679 - 9645 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001036417s
	[INFO] 10.244.0.19:41966 - 2317 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001747111s
	[INFO] 10.244.0.19:59679 - 54818 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000993692s
	[INFO] 10.244.0.19:41966 - 43105 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000590512s
	[INFO] 10.244.0.19:59679 - 56890 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000050454s
	[INFO] 10.244.0.19:56375 - 49895 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000098758s
	[INFO] 10.244.0.19:56375 - 3130 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000065149s
	[INFO] 10.244.0.19:56375 - 47693 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000057519s
	[INFO] 10.244.0.19:56375 - 52900 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000051537s
	[INFO] 10.244.0.19:56375 - 18999 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000101933s
	[INFO] 10.244.0.19:56375 - 47634 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000067751s
	[INFO] 10.244.0.19:56375 - 61214 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001034537s
	[INFO] 10.244.0.19:56375 - 26953 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000780759s
	[INFO] 10.244.0.19:56375 - 55345 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000047697s
	[INFO] 10.244.0.19:53598 - 34885 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.0000918s
	[INFO] 10.244.0.19:53598 - 47557 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000076439s
	[INFO] 10.244.0.19:53598 - 14033 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000060653s
	[INFO] 10.244.0.19:53598 - 24362 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000051717s
	[INFO] 10.244.0.19:53598 - 5860 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00005669s
	[INFO] 10.244.0.19:53598 - 45788 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000056394s
	[INFO] 10.244.0.19:53598 - 35204 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00088407s
	[INFO] 10.244.0.19:53598 - 556 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000766875s
	[INFO] 10.244.0.19:53598 - 64498 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000052636s
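	
	The NXDOMAIN/NOERROR pattern above is the expected effect of `ndots:5` search-list expansion: each lookup is tried against every search domain before the bare name succeeds. Per the resolv.conf rewrite logged in the Docker section above, pod resolvers in this run were configured roughly as follows (reconstructed from that log line, not captured directly; the first search entry varies with the pod's namespace):
	
	    nameserver 10.96.0.10
	    search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	    options ndots:5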
	
	* 
	* ==> describe nodes <==
	* Name:               addons-277112
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-277112
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2
	                    minikube.k8s.io/name=addons-277112
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_18T22_38_04_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-277112
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Dec 2023 22:38:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-277112
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Dec 2023 22:40:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Dec 2023 22:40:37 +0000   Mon, 18 Dec 2023 22:37:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Dec 2023 22:40:37 +0000   Mon, 18 Dec 2023 22:37:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Dec 2023 22:40:37 +0000   Mon, 18 Dec 2023 22:37:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Dec 2023 22:40:37 +0000   Mon, 18 Dec 2023 22:38:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-277112
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 f622d2905cfa4cdda6a48f2fe4c6af79
	  System UUID:                9f3254ab-81f2-49f2-b68a-c4b3850d3b27
	  Boot ID:                    90ea92e2-9dcb-495c-affc-7a21f948b8bd
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5649c69bf6-jkbsv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  default                     hello-world-app-5d77478584-n8s5q           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  gcp-auth                    gcp-auth-d4c87556c-g8cff                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 coredns-5dd5756b68-cs7xm                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m32s
	  kube-system                 etcd-addons-277112                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m44s
	  kube-system                 kube-apiserver-addons-277112               250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 kube-controller-manager-addons-277112      200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 kube-proxy-kqjq4                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-scheduler-addons-277112               100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  local-path-storage          local-path-provisioner-78b46b4d5c-kff7w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m30s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  2m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m53s (x8 over 2m54s)  kubelet          Node addons-277112 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m53s (x8 over 2m54s)  kubelet          Node addons-277112 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m53s (x7 over 2m54s)  kubelet          Node addons-277112 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m45s                  kubelet          Node addons-277112 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m45s                  kubelet          Node addons-277112 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m45s                  kubelet          Node addons-277112 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m45s                  kubelet          Node addons-277112 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m45s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m35s                  kubelet          Node addons-277112 status is now: NodeReady
	  Normal  RegisteredNode           2m33s                  node-controller  Node addons-277112 event: Registered Node addons-277112 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec18 22:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015118] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +1.305533] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.495173] kauditd_printk_skb: 26 callbacks suppressed
	
	* 
	* ==> etcd [ecba945e0a2c] <==
	* {"level":"info","ts":"2023-12-18T22:37:56.838175Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-12-18T22:37:56.838188Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-12-18T22:37:56.838732Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-18T22:37:56.83877Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-18T22:37:56.838779Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-18T22:37:56.839049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-12-18T22:37:56.839126Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-12-18T22:37:57.323115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-18T22:37:57.323318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-18T22:37:57.32341Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-12-18T22:37:57.323518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-12-18T22:37:57.323599Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-12-18T22:37:57.324632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-12-18T22:37:57.324736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-12-18T22:37:57.328341Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-18T22:37:57.329056Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-277112 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-18T22:37:57.329343Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-18T22:37:57.329518Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-18T22:37:57.329864Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-18T22:37:57.332555Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-18T22:37:57.332641Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-18T22:37:57.333682Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-18T22:37:57.341353Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-12-18T22:37:57.348597Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-18T22:37:57.348685Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> gcp-auth [e0546f3c9aec] <==
	* 2023/12/18 22:39:36 GCP Auth Webhook started!
	2023/12/18 22:39:49 Ready to marshal response ...
	2023/12/18 22:39:49 Ready to write response ...
	2023/12/18 22:39:50 Ready to marshal response ...
	2023/12/18 22:39:50 Ready to write response ...
	2023/12/18 22:40:05 Ready to marshal response ...
	2023/12/18 22:40:05 Ready to write response ...
	2023/12/18 22:40:12 Ready to marshal response ...
	2023/12/18 22:40:12 Ready to write response ...
	2023/12/18 22:40:23 Ready to marshal response ...
	2023/12/18 22:40:23 Ready to write response ...
	2023/12/18 22:40:29 Ready to marshal response ...
	2023/12/18 22:40:29 Ready to write response ...
	2023/12/18 22:40:29 Ready to marshal response ...
	2023/12/18 22:40:29 Ready to write response ...
	2023/12/18 22:40:39 Ready to marshal response ...
	2023/12/18 22:40:39 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  22:40:49 up 23 min,  0 users,  load average: 1.10, 1.04, 0.46
	Linux addons-277112 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kube-apiserver [c2f563474405] <==
	* W1218 22:40:07.574494       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1218 22:40:12.150279       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1218 22:40:12.467100       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.22.198"}
	I1218 22:40:22.384076       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1218 22:40:22.384131       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1218 22:40:22.404649       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1218 22:40:22.404882       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1218 22:40:22.425037       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1218 22:40:22.425079       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1218 22:40:22.533796       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1218 22:40:22.533859       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1218 22:40:22.544504       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1218 22:40:22.544625       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1218 22:40:22.574205       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1218 22:40:22.574279       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1218 22:40:22.576483       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1218 22:40:22.580741       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1218 22:40:23.407772       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.79.44"}
	W1218 22:40:23.439941       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1218 22:40:23.577293       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1218 22:40:23.601644       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E1218 22:40:40.658573       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1218 22:40:40.661412       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1218 22:40:40.664380       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1218 22:40:47.612671       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	* 
	* ==> kube-controller-manager [75c7072d77a1] <==
	* W1218 22:40:29.701261       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1218 22:40:29.701289       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1218 22:40:29.734625       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	W1218 22:40:31.285677       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1218 22:40:31.285721       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1218 22:40:31.782525       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	W1218 22:40:33.413096       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1218 22:40:33.413127       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1218 22:40:39.793528       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1218 22:40:39.793786       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1218 22:40:40.337539       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="4.947µs"
	I1218 22:40:41.081606       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="4.669µs"
	I1218 22:40:41.083664       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1218 22:40:41.088627       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1218 22:40:41.572662       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="70.113µs"
	W1218 22:40:42.563647       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1218 22:40:42.563686       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1218 22:40:43.485456       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1218 22:40:43.485488       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1218 22:40:45.452757       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1218 22:40:45.452957       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1218 22:40:46.922917       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I1218 22:40:46.922956       1 shared_informer.go:318] Caches are synced for resource quota
	I1218 22:40:47.356651       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1218 22:40:47.356695       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [418eb7e108fd] <==
	* I1218 22:38:18.606743       1 server_others.go:69] "Using iptables proxy"
	I1218 22:38:18.632673       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1218 22:38:18.679539       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1218 22:38:18.681757       1 server_others.go:152] "Using iptables Proxier"
	I1218 22:38:18.681815       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1218 22:38:18.681826       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1218 22:38:18.681900       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1218 22:38:18.682125       1 server.go:846] "Version info" version="v1.28.4"
	I1218 22:38:18.682135       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1218 22:38:18.684668       1 config.go:188] "Starting service config controller"
	I1218 22:38:18.684684       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1218 22:38:18.684703       1 config.go:97] "Starting endpoint slice config controller"
	I1218 22:38:18.684706       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1218 22:38:18.691379       1 config.go:315] "Starting node config controller"
	I1218 22:38:18.691416       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1218 22:38:18.785351       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1218 22:38:18.785407       1 shared_informer.go:318] Caches are synced for service config
	I1218 22:38:18.797034       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [df5e9951a8c9] <==
	* W1218 22:38:01.785117       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1218 22:38:01.789197       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1218 22:38:01.785167       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1218 22:38:01.789226       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1218 22:38:01.785216       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1218 22:38:01.789254       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1218 22:38:01.785256       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1218 22:38:01.789268       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1218 22:38:01.785309       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1218 22:38:01.789282       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1218 22:38:01.785346       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1218 22:38:01.789302       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1218 22:38:01.785389       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1218 22:38:01.789325       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1218 22:38:02.597330       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1218 22:38:02.597535       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1218 22:38:02.623389       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1218 22:38:02.623596       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1218 22:38:02.657757       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1218 22:38:02.657793       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1218 22:38:02.731937       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1218 22:38:02.732148       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1218 22:38:02.939062       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1218 22:38:02.939269       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1218 22:38:04.840973       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Dec 18 22:40:42 addons-277112 kubelet[2303]: I1218 22:40:42.815951    2303 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnrjv\" (UniqueName: \"kubernetes.io/projected/b4f90fb5-47c3-4f62-904a-e74ea578c0ce-kube-api-access-qnrjv\") pod \"b4f90fb5-47c3-4f62-904a-e74ea578c0ce\" (UID: \"b4f90fb5-47c3-4f62-904a-e74ea578c0ce\") "
	Dec 18 22:40:42 addons-277112 kubelet[2303]: I1218 22:40:42.816165    2303 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b4f90fb5-47c3-4f62-904a-e74ea578c0ce-gcp-creds\") pod \"b4f90fb5-47c3-4f62-904a-e74ea578c0ce\" (UID: \"b4f90fb5-47c3-4f62-904a-e74ea578c0ce\") "
	Dec 18 22:40:42 addons-277112 kubelet[2303]: I1218 22:40:42.816253    2303 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/b4f90fb5-47c3-4f62-904a-e74ea578c0ce-data\") pod \"b4f90fb5-47c3-4f62-904a-e74ea578c0ce\" (UID: \"b4f90fb5-47c3-4f62-904a-e74ea578c0ce\") "
	Dec 18 22:40:42 addons-277112 kubelet[2303]: I1218 22:40:42.816349    2303 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/b4f90fb5-47c3-4f62-904a-e74ea578c0ce-script\") pod \"b4f90fb5-47c3-4f62-904a-e74ea578c0ce\" (UID: \"b4f90fb5-47c3-4f62-904a-e74ea578c0ce\") "
	Dec 18 22:40:42 addons-277112 kubelet[2303]: I1218 22:40:42.816777    2303 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4f90fb5-47c3-4f62-904a-e74ea578c0ce-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "b4f90fb5-47c3-4f62-904a-e74ea578c0ce" (UID: "b4f90fb5-47c3-4f62-904a-e74ea578c0ce"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Dec 18 22:40:42 addons-277112 kubelet[2303]: I1218 22:40:42.816902    2303 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4f90fb5-47c3-4f62-904a-e74ea578c0ce-data" (OuterVolumeSpecName: "data") pod "b4f90fb5-47c3-4f62-904a-e74ea578c0ce" (UID: "b4f90fb5-47c3-4f62-904a-e74ea578c0ce"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Dec 18 22:40:42 addons-277112 kubelet[2303]: I1218 22:40:42.817112    2303 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4f90fb5-47c3-4f62-904a-e74ea578c0ce-script" (OuterVolumeSpecName: "script") pod "b4f90fb5-47c3-4f62-904a-e74ea578c0ce" (UID: "b4f90fb5-47c3-4f62-904a-e74ea578c0ce"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Dec 18 22:40:42 addons-277112 kubelet[2303]: I1218 22:40:42.818379    2303 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4f90fb5-47c3-4f62-904a-e74ea578c0ce-kube-api-access-qnrjv" (OuterVolumeSpecName: "kube-api-access-qnrjv") pod "b4f90fb5-47c3-4f62-904a-e74ea578c0ce" (UID: "b4f90fb5-47c3-4f62-904a-e74ea578c0ce"). InnerVolumeSpecName "kube-api-access-qnrjv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 18 22:40:42 addons-277112 kubelet[2303]: I1218 22:40:42.916897    2303 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qnrjv\" (UniqueName: \"kubernetes.io/projected/b4f90fb5-47c3-4f62-904a-e74ea578c0ce-kube-api-access-qnrjv\") on node \"addons-277112\" DevicePath \"\""
	Dec 18 22:40:42 addons-277112 kubelet[2303]: I1218 22:40:42.916942    2303 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b4f90fb5-47c3-4f62-904a-e74ea578c0ce-gcp-creds\") on node \"addons-277112\" DevicePath \"\""
	Dec 18 22:40:42 addons-277112 kubelet[2303]: I1218 22:40:42.916956    2303 reconciler_common.go:300] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/b4f90fb5-47c3-4f62-904a-e74ea578c0ce-data\") on node \"addons-277112\" DevicePath \"\""
	Dec 18 22:40:42 addons-277112 kubelet[2303]: I1218 22:40:42.916967    2303 reconciler_common.go:300] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/b4f90fb5-47c3-4f62-904a-e74ea578c0ce-script\") on node \"addons-277112\" DevicePath \"\""
	Dec 18 22:40:43 addons-277112 kubelet[2303]: I1218 22:40:43.625742    2303 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a6ab44900c3eb1822539523cb68c03e7ce21cd7a1dd4e367a2928de8db1ec56"
	Dec 18 22:40:44 addons-277112 kubelet[2303]: I1218 22:40:44.426034    2303 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d83afefe-c6cb-4591-8fa1-316fcb7fe216-webhook-cert\") pod \"d83afefe-c6cb-4591-8fa1-316fcb7fe216\" (UID: \"d83afefe-c6cb-4591-8fa1-316fcb7fe216\") "
	Dec 18 22:40:44 addons-277112 kubelet[2303]: I1218 22:40:44.426091    2303 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmr4l\" (UniqueName: \"kubernetes.io/projected/d83afefe-c6cb-4591-8fa1-316fcb7fe216-kube-api-access-xmr4l\") pod \"d83afefe-c6cb-4591-8fa1-316fcb7fe216\" (UID: \"d83afefe-c6cb-4591-8fa1-316fcb7fe216\") "
	Dec 18 22:40:44 addons-277112 kubelet[2303]: I1218 22:40:44.430512    2303 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d83afefe-c6cb-4591-8fa1-316fcb7fe216-kube-api-access-xmr4l" (OuterVolumeSpecName: "kube-api-access-xmr4l") pod "d83afefe-c6cb-4591-8fa1-316fcb7fe216" (UID: "d83afefe-c6cb-4591-8fa1-316fcb7fe216"). InnerVolumeSpecName "kube-api-access-xmr4l". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 18 22:40:44 addons-277112 kubelet[2303]: I1218 22:40:44.430696    2303 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d83afefe-c6cb-4591-8fa1-316fcb7fe216-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "d83afefe-c6cb-4591-8fa1-316fcb7fe216" (UID: "d83afefe-c6cb-4591-8fa1-316fcb7fe216"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 18 22:40:44 addons-277112 kubelet[2303]: I1218 22:40:44.527160    2303 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d83afefe-c6cb-4591-8fa1-316fcb7fe216-webhook-cert\") on node \"addons-277112\" DevicePath \"\""
	Dec 18 22:40:44 addons-277112 kubelet[2303]: I1218 22:40:44.527202    2303 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xmr4l\" (UniqueName: \"kubernetes.io/projected/d83afefe-c6cb-4591-8fa1-316fcb7fe216-kube-api-access-xmr4l\") on node \"addons-277112\" DevicePath \"\""
	Dec 18 22:40:44 addons-277112 kubelet[2303]: I1218 22:40:44.643326    2303 scope.go:117] "RemoveContainer" containerID="098cf56ed5bd66761bd156ff34f9a45935bf7d500cae995ef39b3f319a4de9bd"
	Dec 18 22:40:44 addons-277112 kubelet[2303]: I1218 22:40:44.660839    2303 scope.go:117] "RemoveContainer" containerID="098cf56ed5bd66761bd156ff34f9a45935bf7d500cae995ef39b3f319a4de9bd"
	Dec 18 22:40:44 addons-277112 kubelet[2303]: E1218 22:40:44.664674    2303 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 098cf56ed5bd66761bd156ff34f9a45935bf7d500cae995ef39b3f319a4de9bd" containerID="098cf56ed5bd66761bd156ff34f9a45935bf7d500cae995ef39b3f319a4de9bd"
	Dec 18 22:40:44 addons-277112 kubelet[2303]: I1218 22:40:44.664723    2303 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"098cf56ed5bd66761bd156ff34f9a45935bf7d500cae995ef39b3f319a4de9bd"} err="failed to get container status \"098cf56ed5bd66761bd156ff34f9a45935bf7d500cae995ef39b3f319a4de9bd\": rpc error: code = Unknown desc = Error response from daemon: No such container: 098cf56ed5bd66761bd156ff34f9a45935bf7d500cae995ef39b3f319a4de9bd"
	Dec 18 22:40:44 addons-277112 kubelet[2303]: I1218 22:40:44.756923    2303 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d83afefe-c6cb-4591-8fa1-316fcb7fe216" path="/var/lib/kubelet/pods/d83afefe-c6cb-4591-8fa1-316fcb7fe216/volumes"
	Dec 18 22:40:46 addons-277112 kubelet[2303]: I1218 22:40:46.758054    2303 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b4f90fb5-47c3-4f62-904a-e74ea578c0ce" path="/var/lib/kubelet/pods/b4f90fb5-47c3-4f62-904a-e74ea578c0ce/volumes"
	
	* 
	* ==> storage-provisioner [73696f22931f] <==
	* I1218 22:38:25.397292       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1218 22:38:25.418243       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1218 22:38:25.418331       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1218 22:38:25.425517       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1218 22:38:25.427152       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7e42257c-75b4-4be1-984d-aaabe557b7c0", APIVersion:"v1", ResourceVersion:"641", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-277112_8ba365b5-e2e8-4891-9fe0-a361766ea321 became leader
	I1218 22:38:25.427245       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-277112_8ba365b5-e2e8-4891-9fe0-a361766ea321!
	I1218 22:38:25.530391       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-277112_8ba365b5-e2e8-4891-9fe0-a361766ea321!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-277112 -n addons-277112
helpers_test.go:261: (dbg) Run:  kubectl --context addons-277112 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (38.66s)
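Note that the storage-provisioner log at the end of the post-mortem above shows a clean client-go leader-election handshake (acquire the kube-system/k8s.io-minikube-hostpath lease, emit a LeaderElection event, then start the provisioner controller), so the provisioner itself was healthy when the ingress check failed. For reference, a minimal sketch of that election pattern with client-go's leaderelection package; the identity, timings, and Lease-based lock below are illustrative (this provisioner build records its election against an Endpoints object), not minikube's exact code:

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig() // runs in-cluster, like the provisioner pod
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname() // each candidate needs a unique identity

	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // illustrative timings
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("acquired lease, starting provisioner controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease, shutting down")
			},
		},
	})
}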

x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (56.63s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-319045 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-319045 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (9.616072044s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-319045 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-319045 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [444c0c8f-f3df-4e61-8b17-79875a1556ef] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [444c0c8f-f3df-4e61-8b17-79875a1556ef] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.002972618s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-319045 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-319045 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-319045 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E1218 22:49:37.876447    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.010291304s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached
stderr: 
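The failing step here is the DNS probe: addons_test.go:296 shells out to nslookup against the cluster's static IP, and the ";; connection timed out" output means nothing answered DNS queries on 192.168.49.2. A rough standalone reproduction of that check, with the host name and server IP taken from this run (the retry loop is an assumption, not the test's exact logic):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// lookupViaHost runs `nslookup <host> <server>`, as the test does, and reports
// whether the ingress-dns responder on the minikube node answered.
func lookupViaHost(host, server string) error {
	out, err := exec.Command("nslookup", host, server).CombinedOutput()
	if err != nil {
		return fmt.Errorf("nslookup exited non-zero: %v\n%s", err, out)
	}
	if strings.Contains(string(out), "connection timed out") {
		return fmt.Errorf("no answer from %s:\n%s", server, out)
	}
	return nil
}

func main() {
	// Values from the failing run above; three attempts is illustrative.
	for attempt := 1; attempt <= 3; attempt++ {
		err := lookupViaHost("hello-john.test", "192.168.49.2")
		if err == nil {
			fmt.Println("resolved OK")
			return
		}
		fmt.Printf("attempt %d: %v\n", attempt, err)
		time.Sleep(5 * time.Second)
	}
}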
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-319045 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-319045 addons disable ingress-dns --alsologtostderr -v=1: (12.140114353s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-319045 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-319045 addons disable ingress --alsologtostderr -v=1: (7.476371834s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-319045
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-319045:

-- stdout --
	[
	    {
	        "Id": "a652e14b79ae9402053401bb9dfa7b08d558453f06d9be4eb70e80dd274391fb",
	        "Created": "2023-12-18T22:47:31.009979017Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 54969,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-18T22:47:31.326221882Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1814c03459e4d6cfdad8e09772588ae07599742dd01f942aa2f9fe1dbb6d2813",
	        "ResolvConfPath": "/var/lib/docker/containers/a652e14b79ae9402053401bb9dfa7b08d558453f06d9be4eb70e80dd274391fb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a652e14b79ae9402053401bb9dfa7b08d558453f06d9be4eb70e80dd274391fb/hostname",
	        "HostsPath": "/var/lib/docker/containers/a652e14b79ae9402053401bb9dfa7b08d558453f06d9be4eb70e80dd274391fb/hosts",
	        "LogPath": "/var/lib/docker/containers/a652e14b79ae9402053401bb9dfa7b08d558453f06d9be4eb70e80dd274391fb/a652e14b79ae9402053401bb9dfa7b08d558453f06d9be4eb70e80dd274391fb-json.log",
	        "Name": "/ingress-addon-legacy-319045",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-319045:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-319045",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f3375b885a7899250f6e2a00f9b357ac60b27849331717a1dee1884285ac6fc2-init/diff:/var/lib/docker/overlay2/bc6e43a078e26c3419854bafc48fcee558a938ae61de23978bcedc185e547bd8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f3375b885a7899250f6e2a00f9b357ac60b27849331717a1dee1884285ac6fc2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f3375b885a7899250f6e2a00f9b357ac60b27849331717a1dee1884285ac6fc2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f3375b885a7899250f6e2a00f9b357ac60b27849331717a1dee1884285ac6fc2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-319045",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-319045/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-319045",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-319045",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-319045",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ffd4accac0cdb67f09008d26105202f8a82a7d70cfed262b0e0fa9587f8ec5e5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ffd4accac0cd",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-319045": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a652e14b79ae",
	                        "ingress-addon-legacy-319045"
	                    ],
	                    "NetworkID": "f6d13f8b2d3fb42db369899dcc7038ae777c601804e24733f251afbb335d845b",
	                    "EndpointID": "b011ef7824b92de953dbe9f294260ae4d162014d269dac2d9fac77f905f1d696",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
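One detail worth noting in the inspect output: the kic container publishes ports 22, 2376, 5000, 8443, and 32443 on 127.0.0.1 with ephemeral host ports (8443, the API server, landed on 32789 in this run). A small sketch of how such a mapping can be recovered with an inspect format template instead of parsing the full JSON; the helper below is illustrative, not minikube's own code:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// hostPort asks the Docker CLI for the host side of a published container
// port, using a Go template the same way minikube's cli_runner drives docker.
func hostPort(container, containerPort string) (string, error) {
	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s/tcp") 0).HostPort}}`, containerPort)
	out, err := exec.Command("docker", "container", "inspect", container, "--format", format).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPort("ingress-addon-legacy-319045", "8443")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("API server reachable at 127.0.0.1:" + port) // 32789 in this run
}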
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-319045 -n ingress-addon-legacy-319045
E1218 22:50:05.563439    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-319045 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-319045 logs -n 25: (1.011965492s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-790753                     | functional-790753           | jenkins | v1.32.0 | 18 Dec 23 22:46 UTC |                     |
	|                | --kill=true                              |                             |         |         |                     |                     |
	| update-context | functional-790753                        | functional-790753           | jenkins | v1.32.0 | 18 Dec 23 22:46 UTC | 18 Dec 23 22:46 UTC |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-790753                        | functional-790753           | jenkins | v1.32.0 | 18 Dec 23 22:46 UTC | 18 Dec 23 22:46 UTC |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-790753                        | functional-790753           | jenkins | v1.32.0 | 18 Dec 23 22:46 UTC | 18 Dec 23 22:46 UTC |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| image          | functional-790753                        | functional-790753           | jenkins | v1.32.0 | 18 Dec 23 22:46 UTC | 18 Dec 23 22:46 UTC |
	|                | image ls --format short                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-790753                        | functional-790753           | jenkins | v1.32.0 | 18 Dec 23 22:46 UTC | 18 Dec 23 22:46 UTC |
	|                | image ls --format yaml                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| ssh            | functional-790753 ssh pgrep              | functional-790753           | jenkins | v1.32.0 | 18 Dec 23 22:46 UTC |                     |
	|                | buildkitd                                |                             |         |         |                     |                     |
	| image          | functional-790753 image build -t         | functional-790753           | jenkins | v1.32.0 | 18 Dec 23 22:46 UTC | 18 Dec 23 22:46 UTC |
	|                | localhost/my-image:functional-790753     |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                             |         |         |                     |                     |
	| image          | functional-790753 image ls               | functional-790753           | jenkins | v1.32.0 | 18 Dec 23 22:46 UTC | 18 Dec 23 22:46 UTC |
	| image          | functional-790753                        | functional-790753           | jenkins | v1.32.0 | 18 Dec 23 22:46 UTC | 18 Dec 23 22:46 UTC |
	|                | image ls --format json                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-790753                        | functional-790753           | jenkins | v1.32.0 | 18 Dec 23 22:46 UTC | 18 Dec 23 22:46 UTC |
	|                | image ls --format table                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| delete         | -p functional-790753                     | functional-790753           | jenkins | v1.32.0 | 18 Dec 23 22:46 UTC | 18 Dec 23 22:46 UTC |
	| start          | -p image-917364                          | image-917364                | jenkins | v1.32.0 | 18 Dec 23 22:46 UTC | 18 Dec 23 22:47 UTC |
	|                | --driver=docker                          |                             |         |         |                     |                     |
	|                | --container-runtime=docker               |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-917364                | jenkins | v1.32.0 | 18 Dec 23 22:47 UTC | 18 Dec 23 22:47 UTC |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | -p image-917364                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-917364                | jenkins | v1.32.0 | 18 Dec 23 22:47 UTC | 18 Dec 23 22:47 UTC |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                             |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                             |         |         |                     |                     |
	|                | image-917364                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-917364                | jenkins | v1.32.0 | 18 Dec 23 22:47 UTC | 18 Dec 23 22:47 UTC |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | --build-opt=no-cache -p                  |                             |         |         |                     |                     |
	|                | image-917364                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-917364                | jenkins | v1.32.0 | 18 Dec 23 22:47 UTC | 18 Dec 23 22:47 UTC |
	|                | -f inner/Dockerfile                      |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-f            |                             |         |         |                     |                     |
	|                | -p image-917364                          |                             |         |         |                     |                     |
	| delete         | -p image-917364                          | image-917364                | jenkins | v1.32.0 | 18 Dec 23 22:47 UTC | 18 Dec 23 22:47 UTC |
	| start          | -p ingress-addon-legacy-319045           | ingress-addon-legacy-319045 | jenkins | v1.32.0 | 18 Dec 23 22:47 UTC | 18 Dec 23 22:48 UTC |
	|                | --kubernetes-version=v1.18.20            |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                     |                             |         |         |                     |                     |
	|                | --container-runtime=docker               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-319045              | ingress-addon-legacy-319045 | jenkins | v1.32.0 | 18 Dec 23 22:48 UTC | 18 Dec 23 22:49 UTC |
	|                | addons enable ingress                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-319045              | ingress-addon-legacy-319045 | jenkins | v1.32.0 | 18 Dec 23 22:49 UTC | 18 Dec 23 22:49 UTC |
	|                | addons enable ingress-dns                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-319045              | ingress-addon-legacy-319045 | jenkins | v1.32.0 | 18 Dec 23 22:49 UTC | 18 Dec 23 22:49 UTC |
	|                | ssh curl -s http://127.0.0.1/            |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'             |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-319045 ip           | ingress-addon-legacy-319045 | jenkins | v1.32.0 | 18 Dec 23 22:49 UTC | 18 Dec 23 22:49 UTC |
	| addons         | ingress-addon-legacy-319045              | ingress-addon-legacy-319045 | jenkins | v1.32.0 | 18 Dec 23 22:49 UTC | 18 Dec 23 22:49 UTC |
	|                | addons disable ingress-dns               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-319045              | ingress-addon-legacy-319045 | jenkins | v1.32.0 | 18 Dec 23 22:49 UTC | 18 Dec 23 22:50 UTC |
	|                | addons disable ingress                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 22:47:15
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 22:47:15.243672   54527 out.go:296] Setting OutFile to fd 1 ...
	I1218 22:47:15.243829   54527 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 22:47:15.243840   54527 out.go:309] Setting ErrFile to fd 2...
	I1218 22:47:15.243846   54527 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 22:47:15.244097   54527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-2192/.minikube/bin
	I1218 22:47:15.244489   54527 out.go:303] Setting JSON to false
	I1218 22:47:15.245311   54527 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1785,"bootTime":1702937851,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1218 22:47:15.245379   54527 start.go:138] virtualization:  
	I1218 22:47:15.248236   54527 out.go:177] * [ingress-addon-legacy-319045] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1218 22:47:15.250622   54527 out.go:177]   - MINIKUBE_LOCATION=17822
	I1218 22:47:15.252594   54527 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 22:47:15.250741   54527 notify.go:220] Checking for updates...
	I1218 22:47:15.257063   54527 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17822-2192/kubeconfig
	I1218 22:47:15.259226   54527 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-2192/.minikube
	I1218 22:47:15.261230   54527 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 22:47:15.263240   54527 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 22:47:15.265400   54527 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 22:47:15.292736   54527 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1218 22:47:15.292857   54527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 22:47:15.385771   54527 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-18 22:47:15.37703764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 22:47:15.385870   54527 docker.go:295] overlay module found
	I1218 22:47:15.388363   54527 out.go:177] * Using the docker driver based on user configuration
	I1218 22:47:15.390218   54527 start.go:298] selected driver: docker
	I1218 22:47:15.390235   54527 start.go:902] validating driver "docker" against <nil>
	I1218 22:47:15.390248   54527 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 22:47:15.390840   54527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 22:47:15.464375   54527 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-18 22:47:15.455525913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 22:47:15.464518   54527 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1218 22:47:15.464753   54527 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 22:47:15.466929   54527 out.go:177] * Using Docker driver with root privileges
	I1218 22:47:15.468901   54527 cni.go:84] Creating CNI manager for ""
	I1218 22:47:15.468926   54527 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1218 22:47:15.468938   54527 start_flags.go:323] config:
	{Name:ingress-addon-legacy-319045 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-319045 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 22:47:15.471222   54527 out.go:177] * Starting control plane node ingress-addon-legacy-319045 in cluster ingress-addon-legacy-319045
	I1218 22:47:15.473121   54527 cache.go:121] Beginning downloading kic base image for docker with docker
	I1218 22:47:15.475741   54527 out.go:177] * Pulling base image v0.0.42-1702920864-17822 ...
	I1218 22:47:15.477889   54527 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1218 22:47:15.477964   54527 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon
	I1218 22:47:15.494636   54527 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon, skipping pull
	I1218 22:47:15.494657   54527 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 exists in daemon, skipping load
	I1218 22:47:15.555746   54527 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I1218 22:47:15.555765   54527 cache.go:56] Caching tarball of preloaded images
	I1218 22:47:15.555920   54527 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1218 22:47:15.558413   54527 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1218 22:47:15.560694   54527 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1218 22:47:15.680495   54527 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /home/jenkins/minikube-integration/17822-2192/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I1218 22:47:23.705609   54527 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1218 22:47:23.705716   54527 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17822-2192/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1218 22:47:24.813644   54527 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I1218 22:47:24.814020   54527 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/config.json ...
	I1218 22:47:24.814054   54527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/config.json: {Name:mk0d8e233cd12b56da43db8843751b939c5809ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:47:24.814227   54527 cache.go:194] Successfully downloaded all kic artifacts
	I1218 22:47:24.814287   54527 start.go:365] acquiring machines lock for ingress-addon-legacy-319045: {Name:mk7db636ad644622a2417df6ecd4d7e07af6e5f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 22:47:24.814345   54527 start.go:369] acquired machines lock for "ingress-addon-legacy-319045" in 44.308µs
	I1218 22:47:24.814368   54527 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-319045 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-319045 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1218 22:47:24.814438   54527 start.go:125] createHost starting for "" (driver="docker")
	I1218 22:47:24.817071   54527 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1218 22:47:24.817285   54527 start.go:159] libmachine.API.Create for "ingress-addon-legacy-319045" (driver="docker")
	I1218 22:47:24.817305   54527 client.go:168] LocalClient.Create starting
	I1218 22:47:24.817381   54527 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17822-2192/.minikube/certs/ca.pem
	I1218 22:47:24.817413   54527 main.go:141] libmachine: Decoding PEM data...
	I1218 22:47:24.817433   54527 main.go:141] libmachine: Parsing certificate...
	I1218 22:47:24.817488   54527 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17822-2192/.minikube/certs/cert.pem
	I1218 22:47:24.817510   54527 main.go:141] libmachine: Decoding PEM data...
	I1218 22:47:24.817524   54527 main.go:141] libmachine: Parsing certificate...
	I1218 22:47:24.817872   54527 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-319045 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1218 22:47:24.837405   54527 cli_runner.go:211] docker network inspect ingress-addon-legacy-319045 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1218 22:47:24.837489   54527 network_create.go:281] running [docker network inspect ingress-addon-legacy-319045] to gather additional debugging logs...
	I1218 22:47:24.837514   54527 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-319045
	W1218 22:47:24.854042   54527 cli_runner.go:211] docker network inspect ingress-addon-legacy-319045 returned with exit code 1
	I1218 22:47:24.854078   54527 network_create.go:284] error running [docker network inspect ingress-addon-legacy-319045]: docker network inspect ingress-addon-legacy-319045: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-319045 not found
	I1218 22:47:24.854093   54527 network_create.go:286] output of [docker network inspect ingress-addon-legacy-319045]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-319045 not found
	
	** /stderr **
	I1218 22:47:24.854206   54527 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 22:47:24.870889   54527 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001fdc0e0}
	I1218 22:47:24.870926   54527 network_create.go:124] attempt to create docker network ingress-addon-legacy-319045 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1218 22:47:24.870983   54527 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-319045 ingress-addon-legacy-319045
	I1218 22:47:24.938747   54527 network_create.go:108] docker network ingress-addon-legacy-319045 192.168.49.0/24 created
	I1218 22:47:24.938783   54527 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-319045" container
	I1218 22:47:24.938857   54527 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1218 22:47:24.954680   54527 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-319045 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-319045 --label created_by.minikube.sigs.k8s.io=true
	I1218 22:47:24.972495   54527 oci.go:103] Successfully created a docker volume ingress-addon-legacy-319045
	I1218 22:47:24.972604   54527 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-319045-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-319045 --entrypoint /usr/bin/test -v ingress-addon-legacy-319045:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -d /var/lib
	I1218 22:47:26.356172   54527 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-319045-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-319045 --entrypoint /usr/bin/test -v ingress-addon-legacy-319045:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -d /var/lib: (1.383526285s)
	I1218 22:47:26.356205   54527 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-319045
	I1218 22:47:26.356218   54527 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1218 22:47:26.356236   54527 kic.go:194] Starting extracting preloaded images to volume ...
	I1218 22:47:26.356325   54527 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17822-2192/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-319045:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1218 22:47:30.928970   54527 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17822-2192/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-319045:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.572602463s)
	I1218 22:47:30.929000   54527 kic.go:203] duration metric: took 4.572762 seconds to extract preloaded images to volume
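
The "Completed:" lines above are the preload fast path: the lz4 image tarball is bind-mounted read-only into a throwaway container whose entrypoint is tar, and it unpacks straight into the named volume that later backs /var in the node container. A reduced sketch of that invocation via os/exec (paths and names from this run; the image is pinned by digest in the log and shortened here, and the real code adds retries and progress reporting):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        tarball := "/home/jenkins/minikube-integration/17822-2192/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4"
        volume := "ingress-addon-legacy-319045"
        image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822"

        // One-shot container: tarball mounted read-only, named volume mounted
        // writable, tar as the entrypoint, container removed afterwards (--rm).
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
    }
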
	W1218 22:47:30.929128   54527 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1218 22:47:30.929248   54527 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1218 22:47:30.994558   54527 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-319045 --name ingress-addon-legacy-319045 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-319045 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-319045 --network ingress-addon-legacy-319045 --ip 192.168.49.2 --volume ingress-addon-legacy-319045:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0
	I1218 22:47:31.334705   54527 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-319045 --format={{.State.Running}}
	I1218 22:47:31.368977   54527 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-319045 --format={{.State.Status}}
	I1218 22:47:31.391883   54527 cli_runner.go:164] Run: docker exec ingress-addon-legacy-319045 stat /var/lib/dpkg/alternatives/iptables
	I1218 22:47:31.467686   54527 oci.go:144] the created container "ingress-addon-legacy-319045" has a running status.
	I1218 22:47:31.467711   54527 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17822-2192/.minikube/machines/ingress-addon-legacy-319045/id_rsa...
	I1218 22:47:31.815113   54527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-2192/.minikube/machines/ingress-addon-legacy-319045/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1218 22:47:31.815250   54527 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17822-2192/.minikube/machines/ingress-addon-legacy-319045/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1218 22:47:31.843418   54527 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-319045 --format={{.State.Status}}
	I1218 22:47:31.872805   54527 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1218 22:47:31.872826   54527 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-319045 chown docker:docker /home/docker/.ssh/authorized_keys]
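
Key provisioning above amounts to three small container operations: create the .ssh directory, put the public key in place, and chown it to the docker user. A compressed sketch; note the real kic_runner streams the file over `docker exec` stdin, and `docker cp` merely stands in for that here:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        name := "ingress-addon-legacy-319045"
        pub := "/home/jenkins/minikube-integration/17822-2192/.minikube/machines/" + name + "/id_rsa.pub"

        steps := [][]string{
            {"docker", "exec", "--privileged", name, "mkdir", "-p", "/home/docker/.ssh"},
            {"docker", "cp", pub, name + ":/home/docker/.ssh/authorized_keys"},
            {"docker", "exec", "--privileged", name, "chown", "docker:docker", "/home/docker/.ssh/authorized_keys"},
        }
        for _, s := range steps {
            if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
                log.Fatalf("%v: %v\n%s", s, err, out)
            }
        }
    }
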
	I1218 22:47:31.951942   54527 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-319045 --format={{.State.Status}}
	I1218 22:47:31.997865   54527 machine.go:88] provisioning docker machine ...
	I1218 22:47:31.997891   54527 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-319045"
	I1218 22:47:31.997960   54527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-319045
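
Every `HostPort` inspect call in this log answers the same question: which random host port did `--publish=127.0.0.1::22` land on? The same query as a standalone sketch:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // Resolve which host port Docker bound for the container's SSH port -
    // the query behind the inspect template used repeatedly above.
    func main() {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            "ingress-addon-legacy-319045").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh port:", strings.TrimSpace(string(out))) // 32792 in this run
    }
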
	I1218 22:47:32.023369   54527 main.go:141] libmachine: Using SSH client type: native
	I1218 22:47:32.023779   54527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I1218 22:47:32.023792   54527 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-319045 && echo "ingress-addon-legacy-319045" | sudo tee /etc/hostname
	I1218 22:47:32.231558   54527 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-319045
	
	I1218 22:47:32.231627   54527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-319045
	I1218 22:47:32.254329   54527 main.go:141] libmachine: Using SSH client type: native
	I1218 22:47:32.254723   54527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I1218 22:47:32.254747   54527 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-319045' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-319045/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-319045' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 22:47:32.409280   54527 main.go:141] libmachine: SSH cmd err, output: <nil>: 
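
The shell snippet above makes the /etc/hosts edit idempotent: skip if the hostname is already mapped, rewrite an existing 127.0.1.1 line if present, append otherwise. The same decision tree in Go, operating on the file contents (a sketch, not minikube code):

    package main

    import (
        "fmt"
        "log"
        "os"
        "regexp"
        "strings"
    )

    // ensureHostname mirrors the shell above: no-op if the name is already
    // mapped, rewrite an existing 127.0.1.1 line if there is one, else append.
    func ensureHostname(hosts, name string) string {
        if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
            return hosts // already present
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loopback.MatchString(hosts) {
            return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
        }
        if !strings.HasSuffix(hosts, "\n") {
            hosts += "\n"
        }
        return hosts + "127.0.1.1 " + name + "\n"
    }

    func main() {
        b, err := os.ReadFile("/etc/hosts")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(ensureHostname(string(b), "ingress-addon-legacy-319045"))
    }
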
	I1218 22:47:32.409307   54527 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17822-2192/.minikube CaCertPath:/home/jenkins/minikube-integration/17822-2192/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17822-2192/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17822-2192/.minikube}
	I1218 22:47:32.409330   54527 ubuntu.go:177] setting up certificates
	I1218 22:47:32.409339   54527 provision.go:83] configureAuth start
	I1218 22:47:32.409398   54527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-319045
	I1218 22:47:32.440878   54527 provision.go:138] copyHostCerts
	I1218 22:47:32.440916   54527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-2192/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17822-2192/.minikube/ca.pem
	I1218 22:47:32.440945   54527 exec_runner.go:144] found /home/jenkins/minikube-integration/17822-2192/.minikube/ca.pem, removing ...
	I1218 22:47:32.440956   54527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17822-2192/.minikube/ca.pem
	I1218 22:47:32.441019   54527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-2192/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17822-2192/.minikube/ca.pem (1078 bytes)
	I1218 22:47:32.441089   54527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-2192/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17822-2192/.minikube/cert.pem
	I1218 22:47:32.441111   54527 exec_runner.go:144] found /home/jenkins/minikube-integration/17822-2192/.minikube/cert.pem, removing ...
	I1218 22:47:32.441116   54527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17822-2192/.minikube/cert.pem
	I1218 22:47:32.441141   54527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-2192/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17822-2192/.minikube/cert.pem (1123 bytes)
	I1218 22:47:32.441206   54527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-2192/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17822-2192/.minikube/key.pem
	I1218 22:47:32.441227   54527 exec_runner.go:144] found /home/jenkins/minikube-integration/17822-2192/.minikube/key.pem, removing ...
	I1218 22:47:32.441236   54527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17822-2192/.minikube/key.pem
	I1218 22:47:32.441264   54527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17822-2192/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17822-2192/.minikube/key.pem (1675 bytes)
	I1218 22:47:32.441315   54527 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17822-2192/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17822-2192/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17822-2192/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-319045 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-319045]
	I1218 22:47:32.895198   54527 provision.go:172] copyRemoteCerts
	I1218 22:47:32.895268   54527 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 22:47:32.895312   54527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-319045
	I1218 22:47:32.912250   54527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/ingress-addon-legacy-319045/id_rsa Username:docker}
	I1218 22:47:33.014690   54527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-2192/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1218 22:47:33.014753   54527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-2192/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1218 22:47:33.042702   54527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-2192/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1218 22:47:33.042806   54527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-2192/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1218 22:47:33.070545   54527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-2192/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1218 22:47:33.070606   54527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-2192/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1218 22:47:33.098655   54527 provision.go:86] duration metric: configureAuth took 689.299659ms
	I1218 22:47:33.098682   54527 ubuntu.go:193] setting minikube options for container-runtime
	I1218 22:47:33.098875   54527 config.go:182] Loaded profile config "ingress-addon-legacy-319045": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1218 22:47:33.098936   54527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-319045
	I1218 22:47:33.117534   54527 main.go:141] libmachine: Using SSH client type: native
	I1218 22:47:33.117955   54527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I1218 22:47:33.117974   54527 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1218 22:47:33.269916   54527 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1218 22:47:33.269939   54527 ubuntu.go:71] root file system type: overlay
	I1218 22:47:33.270076   54527 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1218 22:47:33.270152   54527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-319045
	I1218 22:47:33.287915   54527 main.go:141] libmachine: Using SSH client type: native
	I1218 22:47:33.288324   54527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I1218 22:47:33.288444   54527 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1218 22:47:33.446254   54527 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1218 22:47:33.446341   54527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-319045
	I1218 22:47:33.464303   54527 main.go:141] libmachine: Using SSH client type: native
	I1218 22:47:33.464746   54527 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3beb50] 0x3c12c0 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I1218 22:47:33.464771   54527 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1218 22:47:34.270893   54527 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:20.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-12-18 22:47:33.441324539 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1218 22:47:34.270924   54527 machine.go:91] provisioned docker machine in 2.273042297s
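
The command that produced the diff output above is a guard pattern worth noting: `diff -u old new || { mv new old && daemon-reload && restart; }` only replaces the unit and restarts Docker when the rendered file actually changed (here it did, hence the restart). Sketched in Go with a byte comparison instead of diff:

    package main

    import (
        "bytes"
        "log"
        "os"
        "os/exec"
    )

    func main() {
        unit := "/lib/systemd/system/docker.service"
        cur, _ := os.ReadFile(unit) // a missing unit simply counts as "differs"
        neu, err := os.ReadFile(unit + ".new")
        if err != nil {
            log.Fatal(err)
        }
        if bytes.Equal(cur, neu) {
            return // unchanged: leave the running daemon alone
        }
        if err := os.Rename(unit+".new", unit); err != nil {
            log.Fatal(err)
        }
        for _, args := range [][]string{
            {"systemctl", "-f", "daemon-reload"},
            {"systemctl", "-f", "enable", "docker"},
            {"systemctl", "-f", "restart", "docker"},
        } {
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                log.Fatalf("%v: %v\n%s", args, err, out)
            }
        }
    }
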
	I1218 22:47:34.270936   54527 client.go:171] LocalClient.Create took 9.453625613s
	I1218 22:47:34.270948   54527 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-319045" took 9.453663521s
	I1218 22:47:34.270957   54527 start.go:300] post-start starting for "ingress-addon-legacy-319045" (driver="docker")
	I1218 22:47:34.270966   54527 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 22:47:34.271042   54527 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 22:47:34.271084   54527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-319045
	I1218 22:47:34.291365   54527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/ingress-addon-legacy-319045/id_rsa Username:docker}
	I1218 22:47:34.399170   54527 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 22:47:34.403018   54527 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1218 22:47:34.403059   54527 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1218 22:47:34.403071   54527 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1218 22:47:34.403078   54527 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1218 22:47:34.403087   54527 filesync.go:126] Scanning /home/jenkins/minikube-integration/17822-2192/.minikube/addons for local assets ...
	I1218 22:47:34.403143   54527 filesync.go:126] Scanning /home/jenkins/minikube-integration/17822-2192/.minikube/files for local assets ...
	I1218 22:47:34.403220   54527 filesync.go:149] local asset: /home/jenkins/minikube-integration/17822-2192/.minikube/files/etc/ssl/certs/74892.pem -> 74892.pem in /etc/ssl/certs
	I1218 22:47:34.403226   54527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-2192/.minikube/files/etc/ssl/certs/74892.pem -> /etc/ssl/certs/74892.pem
	I1218 22:47:34.403321   54527 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 22:47:34.413065   54527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-2192/.minikube/files/etc/ssl/certs/74892.pem --> /etc/ssl/certs/74892.pem (1708 bytes)
	I1218 22:47:34.439028   54527 start.go:303] post-start completed in 168.057345ms
	I1218 22:47:34.439379   54527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-319045
	I1218 22:47:34.459019   54527 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/config.json ...
	I1218 22:47:34.459280   54527 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 22:47:34.459319   54527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-319045
	I1218 22:47:34.478114   54527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/ingress-addon-legacy-319045/id_rsa Username:docker}
	I1218 22:47:34.578797   54527 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1218 22:47:34.583913   54527 start.go:128] duration metric: createHost completed in 9.769463019s
	I1218 22:47:34.583943   54527 start.go:83] releasing machines lock for "ingress-addon-legacy-319045", held for 9.76958467s
	I1218 22:47:34.584049   54527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-319045
	I1218 22:47:34.600270   54527 ssh_runner.go:195] Run: cat /version.json
	I1218 22:47:34.600306   54527 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 22:47:34.600326   54527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-319045
	I1218 22:47:34.600372   54527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-319045
	I1218 22:47:34.624673   54527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/ingress-addon-legacy-319045/id_rsa Username:docker}
	I1218 22:47:34.632860   54527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/ingress-addon-legacy-319045/id_rsa Username:docker}
	I1218 22:47:34.724618   54527 ssh_runner.go:195] Run: systemctl --version
	I1218 22:47:34.864029   54527 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1218 22:47:34.869375   54527 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1218 22:47:34.898305   54527 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1218 22:47:34.898394   54527 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1218 22:47:34.919014   54527 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1218 22:47:34.937878   54527 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
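
The find/sed pair above pins any pre-existing bridge CNI config to minikube's pod CIDR so it cannot conflict with the cluster's 10.244.0.0/16. The core substitution, as a Go regexp over one config snippet (the input here is a hypothetical existing config):

    package main

    import (
        "fmt"
        "regexp"
    )

    // Pin a bridge CNI config's subnet to the pod CIDR - the heart of the
    // sed substitution run above.
    func main() {
        conf := `{"cniVersion": "0.3.1", "type": "bridge", "subnet": "10.88.0.0/16"}`
        re := regexp.MustCompile(`"subnet":\s*"[^"]*"`)
        fmt.Println(re.ReplaceAllString(conf, `"subnet": "10.244.0.0/16"`))
    }
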
	I1218 22:47:34.937925   54527 start.go:475] detecting cgroup driver to use...
	I1218 22:47:34.937974   54527 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1218 22:47:34.938113   54527 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 22:47:34.956916   54527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1218 22:47:34.967581   54527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 22:47:34.978654   54527 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 22:47:34.978723   54527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 22:47:34.989492   54527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 22:47:35.000883   54527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 22:47:35.012783   54527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 22:47:35.024758   54527 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 22:47:35.035810   54527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 22:47:35.046905   54527 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 22:47:35.056623   54527 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 22:47:35.066455   54527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 22:47:35.152588   54527 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1218 22:47:35.268817   54527 start.go:475] detecting cgroup driver to use...
	I1218 22:47:35.268891   54527 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1218 22:47:35.268972   54527 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1218 22:47:35.287839   54527 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1218 22:47:35.287935   54527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 22:47:35.300614   54527 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 22:47:35.318837   54527 ssh_runner.go:195] Run: which cri-dockerd
	I1218 22:47:35.323298   54527 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1218 22:47:35.332675   54527 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1218 22:47:35.355993   54527 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1218 22:47:35.458230   54527 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1218 22:47:35.567316   54527 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1218 22:47:35.567469   54527 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1218 22:47:35.588111   54527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 22:47:35.690382   54527 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1218 22:47:35.960281   54527 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1218 22:47:35.988003   54527 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1218 22:47:36.019577   54527 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	I1218 22:47:36.019689   54527 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-319045 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1218 22:47:36.037839   54527 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1218 22:47:36.042351   54527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 22:47:36.055738   54527 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1218 22:47:36.055803   54527 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1218 22:47:36.076672   54527 docker.go:671] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1218 22:47:36.076696   54527 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1218 22:47:36.076756   54527 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1218 22:47:36.087529   54527 ssh_runner.go:195] Run: which lz4
	I1218 22:47:36.091912   54527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-2192/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1218 22:47:36.092004   54527 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1218 22:47:36.096329   54527 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1218 22:47:36.096368   54527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-2192/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I1218 22:47:37.963114   54527 docker.go:635] Took 1.871131 seconds to copy over tarball
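
The "existence check ... Process exited with status 1" sequence is the transfer protocol in miniature: stat the remote path first, and only copy the 459 MB tarball over when stat fails. A sketch of that probe with golang.org/x/crypto/ssh, reusing the key path and host port from this run:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // remoteExists runs the same probe as the log: stat the remote path and
    // treat a non-zero exit as "missing, transfer it".
    func remoteExists(c *ssh.Client, path string) (bool, error) {
        s, err := c.NewSession()
        if err != nil {
            return false, err
        }
        defer s.Close()
        if err := s.Run(fmt.Sprintf(`stat -c "%%s %%y" %q`, path)); err != nil {
            if _, ok := err.(*ssh.ExitError); ok {
                return false, nil // exited non-zero: file not there
            }
            return false, err // connection-level failure
        }
        return true, nil
    }

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/17822-2192/.minikube/machines/ingress-addon-legacy-319045/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:32792", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        })
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        ok, err := remoteExists(client, "/preloaded.tar.lz4")
        fmt.Println(ok, err)
    }
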
	I1218 22:47:37.963227   54527 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1218 22:47:40.473266   54527 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.509987852s)
	I1218 22:47:40.473290   54527 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1218 22:47:40.552851   54527 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1218 22:47:40.562835   54527 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I1218 22:47:40.582964   54527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 22:47:40.669038   54527 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1218 22:47:42.121290   54527 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.452216933s)
	I1218 22:47:42.121378   54527 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1218 22:47:42.143214   54527 docker.go:671] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1218 22:47:42.143232   54527 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1218 22:47:42.143242   54527 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1218 22:47:42.145940   54527 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1218 22:47:42.146096   54527 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1218 22:47:42.145940   54527 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1218 22:47:42.146208   54527 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1218 22:47:42.146231   54527 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1218 22:47:42.146301   54527 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 22:47:42.146314   54527 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1218 22:47:42.146356   54527 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1218 22:47:42.148097   54527 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1218 22:47:42.148158   54527 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1218 22:47:42.148249   54527 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1218 22:47:42.148454   54527 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1218 22:47:42.148589   54527 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1218 22:47:42.148686   54527 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1218 22:47:42.148774   54527 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 22:47:42.148097   54527 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	W1218 22:47:42.494837   54527 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1218 22:47:42.495107   54527 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W1218 22:47:42.502314   54527 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1218 22:47:42.502544   54527 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W1218 22:47:42.508322   54527 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1218 22:47:42.508500   54527 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W1218 22:47:42.513011   54527 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1218 22:47:42.513237   54527 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1218 22:47:42.526118   54527 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1218 22:47:42.527516   54527 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1218 22:47:42.527578   54527 docker.go:323] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1218 22:47:42.527628   54527 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	W1218 22:47:42.530467   54527 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1218 22:47:42.530690   54527 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	W1218 22:47:42.563457   54527 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1218 22:47:42.563695   54527 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1218 22:47:42.601905   54527 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1218 22:47:42.602214   54527 docker.go:323] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1218 22:47:42.602087   54527 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1218 22:47:42.602288   54527 docker.go:323] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1218 22:47:42.602322   54527 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I1218 22:47:42.602117   54527 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1218 22:47:42.602376   54527 docker.go:323] Removing image: registry.k8s.io/pause:3.2
	I1218 22:47:42.602405   54527 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I1218 22:47:42.602168   54527 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1218 22:47:42.602441   54527 docker.go:323] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1218 22:47:42.602459   54527 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1218 22:47:42.602536   54527 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1218 22:47:42.617838   54527 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1218 22:47:42.617882   54527 docker.go:323] Removing image: registry.k8s.io/coredns:1.6.7
	I1218 22:47:42.617929   54527 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I1218 22:47:42.619030   54527 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17822-2192/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1218 22:47:42.624162   54527 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1218 22:47:42.624203   54527 docker.go:323] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1218 22:47:42.624249   54527 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	W1218 22:47:42.682280   54527 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1218 22:47:42.682495   54527 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 22:47:42.683662   54527 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17822-2192/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1218 22:47:42.683754   54527 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17822-2192/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1218 22:47:42.683823   54527 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17822-2192/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1218 22:47:42.683887   54527 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17822-2192/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1218 22:47:42.701362   54527 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17822-2192/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1218 22:47:42.715324   54527 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1218 22:47:42.715404   54527 docker.go:323] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 22:47:42.715487   54527 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 22:47:42.716696   54527 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17822-2192/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1218 22:47:42.746765   54527 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17822-2192/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1218 22:47:42.746841   54527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-2192/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 -> /var/lib/minikube/images/storage-provisioner_v5
	I1218 22:47:42.746944   54527 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1218 22:47:42.751346   54527 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1218 22:47:42.751379   54527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-2192/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1218 22:47:42.824200   54527 docker.go:290] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1218 22:47:42.824281   54527 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1218 22:47:43.059838   54527 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17822-2192/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1218 22:47:43.059933   54527 cache_images.go:92] LoadImages completed in 916.675426ms
	W1218 22:47:43.060026   54527 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17822-2192/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20: no such file or directory
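
Only storage-provisioner made it out of the cache here (the kube-* arm64 tarballs were never downloaded, hence the warning). The load step itself streams an image tarball into `docker load`; an equivalent minimal sketch, feeding the file over stdin rather than the `sudo cat ... | docker load` shell pipeline used above:

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    // Stream a cached image tarball into the runtime, as in the
    // "Loading image: /var/lib/minikube/images/storage-provisioner_v5" step.
    func main() {
        f, err := os.Open("/var/lib/minikube/images/storage-provisioner_v5")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        cmd := exec.Command("docker", "load")
        cmd.Stdin = f // the tarball goes straight to dockerd's load endpoint
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("docker load: %v\n%s", err, out)
        }
    }
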
	I1218 22:47:43.060105   54527 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1218 22:47:43.120661   54527 cni.go:84] Creating CNI manager for ""
	I1218 22:47:43.120731   54527 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1218 22:47:43.121325   54527 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1218 22:47:43.121352   54527 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-319045 NodeName:ingress-addon-legacy-319045 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1218 22:47:43.121494   54527 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-319045"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1218 22:47:43.121563   54527 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-319045 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-319045 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1218 22:47:43.121637   54527 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1218 22:47:43.131951   54527 binaries.go:44] Found k8s binaries, skipping transfer
	I1218 22:47:43.132018   54527 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 22:47:43.141363   54527 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1218 22:47:43.160886   54527 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1218 22:47:43.180420   54527 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I1218 22:47:43.199551   54527 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1218 22:47:43.203731   54527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 22:47:43.215894   54527 certs.go:56] Setting up /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045 for IP: 192.168.49.2
	I1218 22:47:43.215920   54527 certs.go:190] acquiring lock for shared ca certs: {Name:mkcf78e809e515e2090b1ff7ca96510a1c2d2b6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:47:43.216093   54527 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17822-2192/.minikube/ca.key
	I1218 22:47:43.216145   54527 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17822-2192/.minikube/proxy-client-ca.key
	I1218 22:47:43.216201   54527 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.key
	I1218 22:47:43.216217   54527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt with IP's: []
	I1218 22:47:43.446570   54527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt ...
	I1218 22:47:43.446600   54527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: {Name:mk21fbf13ee9d69e3b497ecad7c843aefda4bec7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:47:43.446806   54527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.key ...
	I1218 22:47:43.446822   54527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.key: {Name:mk7d8831b5aa5656ce2d54796a89d1256b8f3114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:47:43.446916   54527 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/apiserver.key.dd3b5fb2
	I1218 22:47:43.446934   54527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1218 22:47:43.794709   54527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/apiserver.crt.dd3b5fb2 ...
	I1218 22:47:43.794745   54527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/apiserver.crt.dd3b5fb2: {Name:mkd016c177ea4585eb55d45c1095a290b8e1dda8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:47:43.794927   54527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/apiserver.key.dd3b5fb2 ...
	I1218 22:47:43.794942   54527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/apiserver.key.dd3b5fb2: {Name:mkd842449afc9d773b614335f60de25ea5a5caf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:47:43.795020   54527 certs.go:337] copying /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/apiserver.crt
	I1218 22:47:43.795101   54527 certs.go:341] copying /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/apiserver.key
	I1218 22:47:43.795160   54527 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/proxy-client.key
	I1218 22:47:43.795196   54527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/proxy-client.crt with IP's: []
	I1218 22:47:44.172767   54527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/proxy-client.crt ...
	I1218 22:47:44.172798   54527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/proxy-client.crt: {Name:mkabaac605c4eefccbfe0f80257611117aa7eb3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:47:44.172968   54527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/proxy-client.key ...
	I1218 22:47:44.172982   54527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/proxy-client.key: {Name:mk6b7ed63b6f9ee2edf680252e4b8ca1532255a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
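In the lines above, certs.go and crypto.go generate per-profile leaf certificates and sign them with the shared minikube CA. A minimal sketch of that flow using Go's standard library; the file names, subject fields, and the assumption that the CA key is PKCS#1 RSA are illustrative, not minikube's exact code:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "os"
        "time"
    )

    func check(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // Load the shared CA certificate and key that are reused across profiles.
        caPEM, err := os.ReadFile("ca.crt")
        check(err)
        caKeyPEM, err := os.ReadFile("ca.key")
        check(err)
        caBlock, _ := pem.Decode(caPEM)
        caCert, err := x509.ParseCertificate(caBlock.Bytes)
        check(err)
        keyBlock, _ := pem.Decode(caKeyPEM)
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes PKCS#1 RSA
        check(err)

        // Fresh key pair for the client certificate.
        clientKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)

        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
        }
        // Sign the leaf certificate with the CA, as crypto.go's "generating cert" steps do.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &clientKey.PublicKey, caKey)
        check(err)

        check(os.WriteFile("client.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644))
        check(os.WriteFile("client.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(clientKey)}), 0600))
    }
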
	I1218 22:47:44.173056   54527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1218 22:47:44.173077   54527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1218 22:47:44.173093   54527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1218 22:47:44.173109   54527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1218 22:47:44.173120   54527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-2192/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1218 22:47:44.173136   54527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-2192/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1218 22:47:44.173147   54527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-2192/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1218 22:47:44.173162   54527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-2192/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1218 22:47:44.173225   54527 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-2192/.minikube/certs/home/jenkins/minikube-integration/17822-2192/.minikube/certs/7489.pem (1338 bytes)
	W1218 22:47:44.173264   54527 certs.go:433] ignoring /home/jenkins/minikube-integration/17822-2192/.minikube/certs/home/jenkins/minikube-integration/17822-2192/.minikube/certs/7489_empty.pem, impossibly tiny 0 bytes
	I1218 22:47:44.173276   54527 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-2192/.minikube/certs/home/jenkins/minikube-integration/17822-2192/.minikube/certs/ca-key.pem (1679 bytes)
	I1218 22:47:44.173302   54527 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-2192/.minikube/certs/home/jenkins/minikube-integration/17822-2192/.minikube/certs/ca.pem (1078 bytes)
	I1218 22:47:44.173329   54527 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-2192/.minikube/certs/home/jenkins/minikube-integration/17822-2192/.minikube/certs/cert.pem (1123 bytes)
	I1218 22:47:44.173355   54527 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-2192/.minikube/certs/home/jenkins/minikube-integration/17822-2192/.minikube/certs/key.pem (1675 bytes)
	I1218 22:47:44.173407   54527 certs.go:437] found cert: /home/jenkins/minikube-integration/17822-2192/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17822-2192/.minikube/files/etc/ssl/certs/74892.pem (1708 bytes)
	I1218 22:47:44.173438   54527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-2192/.minikube/certs/7489.pem -> /usr/share/ca-certificates/7489.pem
	I1218 22:47:44.173458   54527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-2192/.minikube/files/etc/ssl/certs/74892.pem -> /usr/share/ca-certificates/74892.pem
	I1218 22:47:44.173474   54527 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17822-2192/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1218 22:47:44.174038   54527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1218 22:47:44.200031   54527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1218 22:47:44.225722   54527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 22:47:44.252252   54527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1218 22:47:44.278957   54527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-2192/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 22:47:44.305641   54527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-2192/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1218 22:47:44.332454   54527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-2192/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 22:47:44.358901   54527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-2192/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1218 22:47:44.385846   54527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-2192/.minikube/certs/7489.pem --> /usr/share/ca-certificates/7489.pem (1338 bytes)
	I1218 22:47:44.412067   54527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-2192/.minikube/files/etc/ssl/certs/74892.pem --> /usr/share/ca-certificates/74892.pem (1708 bytes)
	I1218 22:47:44.439129   54527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17822-2192/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 22:47:44.466178   54527 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 22:47:44.486002   54527 ssh_runner.go:195] Run: openssl version
	I1218 22:47:44.492653   54527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7489.pem && ln -fs /usr/share/ca-certificates/7489.pem /etc/ssl/certs/7489.pem"
	I1218 22:47:44.503617   54527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7489.pem
	I1218 22:47:44.508335   54527 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 18 22:42 /usr/share/ca-certificates/7489.pem
	I1218 22:47:44.508437   54527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7489.pem
	I1218 22:47:44.516515   54527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7489.pem /etc/ssl/certs/51391683.0"
	I1218 22:47:44.527415   54527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/74892.pem && ln -fs /usr/share/ca-certificates/74892.pem /etc/ssl/certs/74892.pem"
	I1218 22:47:44.538642   54527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/74892.pem
	I1218 22:47:44.543196   54527 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 18 22:42 /usr/share/ca-certificates/74892.pem
	I1218 22:47:44.543299   54527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/74892.pem
	I1218 22:47:44.552191   54527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/74892.pem /etc/ssl/certs/3ec20f2e.0"
	I1218 22:47:44.563659   54527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1218 22:47:44.574862   54527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 22:47:44.579412   54527 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 18 22:37 /usr/share/ca-certificates/minikubeCA.pem
	I1218 22:47:44.579474   54527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 22:47:44.588264   54527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
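The test -L || ln -fs commands above replicate what update-ca-certificates does: OpenSSL's subject-name hash of each CA certificate becomes the /etc/ssl/certs/<hash>.0 filename that TLS libraries use for lookup. A small Go sketch of one such step, shelling out to openssl exactly as the log does (paths taken from the run above):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        // openssl x509 -hash -noout prints the subject-name hash, e.g. "b5213941".
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // ln -fs equivalent: replace any existing link.
        _ = os.Remove(link)
        if err := os.Symlink(cert, link); err != nil {
            panic(err)
        }
        fmt.Println("linked", link, "->", cert)
    }
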
	I1218 22:47:44.599486   54527 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1218 22:47:44.603725   54527 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1218 22:47:44.603771   54527 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-319045 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-319045 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 22:47:44.603896   54527 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1218 22:47:44.624050   54527 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 22:47:44.636065   54527 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 22:47:44.646854   54527 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1218 22:47:44.646932   54527 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 22:47:44.657653   54527 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 22:47:44.657695   54527 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1218 22:47:44.719073   54527 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1218 22:47:44.719269   54527 kubeadm.go:322] [preflight] Running pre-flight checks
	I1218 22:47:44.931531   54527 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1218 22:47:44.931622   54527 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I1218 22:47:44.931684   54527 kubeadm.go:322] DOCKER_VERSION: 24.0.7
	I1218 22:47:44.931722   54527 kubeadm.go:322] OS: Linux
	I1218 22:47:44.931772   54527 kubeadm.go:322] CGROUPS_CPU: enabled
	I1218 22:47:44.931821   54527 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1218 22:47:44.931869   54527 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1218 22:47:44.931918   54527 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1218 22:47:44.931967   54527 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1218 22:47:44.932016   54527 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1218 22:47:45.021165   54527 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1218 22:47:45.021379   54527 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1218 22:47:45.021523   54527 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1218 22:47:45.243822   54527 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 22:47:45.245689   54527 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 22:47:45.247919   54527 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1218 22:47:45.357692   54527 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 22:47:45.359745   54527 out.go:204]   - Generating certificates and keys ...
	I1218 22:47:45.359860   54527 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1218 22:47:45.359954   54527 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1218 22:47:45.735064   54527 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1218 22:47:46.019619   54527 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1218 22:47:46.368710   54527 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1218 22:47:46.618399   54527 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1218 22:47:46.733880   54527 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1218 22:47:46.734198   54527 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-319045 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1218 22:47:47.596819   54527 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1218 22:47:47.597217   54527 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-319045 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1218 22:47:48.310060   54527 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1218 22:47:48.518907   54527 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1218 22:47:48.875187   54527 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1218 22:47:48.875470   54527 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 22:47:49.455913   54527 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 22:47:50.025082   54527 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 22:47:50.419958   54527 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 22:47:50.935787   54527 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 22:47:50.936475   54527 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 22:47:50.939306   54527 out.go:204]   - Booting up control plane ...
	I1218 22:47:50.939403   54527 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 22:47:50.953301   54527 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 22:47:50.953385   54527 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 22:47:50.953462   54527 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 22:47:50.953618   54527 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1218 22:48:02.962682   54527 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.009959 seconds
	I1218 22:48:02.962868   54527 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1218 22:48:02.975882   54527 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1218 22:48:03.495138   54527 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1218 22:48:03.495293   54527 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-319045 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1218 22:48:04.018146   54527 kubeadm.go:322] [bootstrap-token] Using token: fkmm67.22pf0utr2lawj4m2
	I1218 22:48:04.021264   54527 out.go:204]   - Configuring RBAC rules ...
	I1218 22:48:04.021390   54527 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1218 22:48:04.031638   54527 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1218 22:48:04.040841   54527 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1218 22:48:04.046671   54527 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1218 22:48:04.054546   54527 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1218 22:48:04.060856   54527 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1218 22:48:04.080110   54527 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1218 22:48:04.347266   54527 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1218 22:48:04.434727   54527 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1218 22:48:04.435968   54527 kubeadm.go:322] 
	I1218 22:48:04.436047   54527 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1218 22:48:04.436075   54527 kubeadm.go:322] 
	I1218 22:48:04.436153   54527 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1218 22:48:04.436164   54527 kubeadm.go:322] 
	I1218 22:48:04.436194   54527 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1218 22:48:04.436254   54527 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1218 22:48:04.436304   54527 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1218 22:48:04.436312   54527 kubeadm.go:322] 
	I1218 22:48:04.436361   54527 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1218 22:48:04.436435   54527 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1218 22:48:04.436503   54527 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1218 22:48:04.436511   54527 kubeadm.go:322] 
	I1218 22:48:04.436608   54527 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1218 22:48:04.436684   54527 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1218 22:48:04.436693   54527 kubeadm.go:322] 
	I1218 22:48:04.436773   54527 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token fkmm67.22pf0utr2lawj4m2 \
	I1218 22:48:04.436876   54527 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:c9557dd7f437ed4cd7be329c62a8a7ae9cbf7c397b86c56a297c9c177867a738 \
	I1218 22:48:04.436902   54527 kubeadm.go:322]     --control-plane 
	I1218 22:48:04.436911   54527 kubeadm.go:322] 
	I1218 22:48:04.436990   54527 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1218 22:48:04.436998   54527 kubeadm.go:322] 
	I1218 22:48:04.437075   54527 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token fkmm67.22pf0utr2lawj4m2 \
	I1218 22:48:04.437176   54527 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:c9557dd7f437ed4cd7be329c62a8a7ae9cbf7c397b86c56a297c9c177867a738 
	I1218 22:48:04.439904   54527 kubeadm.go:322] W1218 22:47:44.717866    1688 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1218 22:48:04.440096   54527 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1218 22:48:04.440230   54527 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I1218 22:48:04.440453   54527 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I1218 22:48:04.440579   54527 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1218 22:48:04.440713   54527 kubeadm.go:322] W1218 22:47:50.944793    1688 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1218 22:48:04.440842   54527 kubeadm.go:322] W1218 22:47:50.946651    1688 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
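The kubeadm join lines above carry a --discovery-token-ca-cert-hash. That value is not arbitrary: it is the SHA-256 digest of the DER-encoded Subject Public Key Info of the cluster CA certificate. A short sketch of recomputing it (not kubeadm's own code; the CA path is the one used in this run):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Hash the DER-encoded SubjectPublicKeyInfo, which is what the
        // "sha256:..." value in the join command is computed over.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
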
	I1218 22:48:04.440883   54527 cni.go:84] Creating CNI manager for ""
	I1218 22:48:04.440903   54527 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1218 22:48:04.440991   54527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:48:04.441100   54527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2 minikube.k8s.io/name=ingress-addon-legacy-319045 minikube.k8s.io/updated_at=2023_12_18T22_48_04_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:48:04.441159   54527 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1218 22:48:05.008467   54527 ops.go:34] apiserver oom_adj: -16
	I1218 22:48:05.008585   54527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:48:05.508678   54527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:48:06.009462   54527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:48:06.508632   54527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:48:07.008917   54527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:48:07.508654   54527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:48:08.008681   54527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:48:08.509516   54527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:48:09.008778   54527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:48:09.508779   54527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:48:10.008631   54527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:48:10.509485   54527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:48:11.009502   54527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:48:11.508664   54527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:48:12.009325   54527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:48:12.508818   54527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:48:13.008663   54527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:48:13.509604   54527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:48:14.009650   54527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:48:14.509502   54527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:48:15.008750   54527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:48:15.508655   54527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:48:16.008696   54527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:48:16.509506   54527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:48:17.008844   54527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:48:17.509400   54527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:48:18.009334   54527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:48:18.508699   54527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:48:19.008666   54527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1218 22:48:19.180133   54527 kubeadm.go:1088] duration metric: took 14.739185438s to wait for elevateKubeSystemPrivileges.
	I1218 22:48:19.180163   54527 kubeadm.go:406] StartCluster complete in 34.576394803s
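The burst of kubectl get sa default runs above, one roughly every 500ms, is a plain poll-until-ready loop waiting for the default service account to exist. A stdlib-only sketch of the pattern, with illustrative interval and timeout values:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "time"
    )

    // pollUntil runs check every interval until it succeeds or timeout elapses.
    func pollUntil(interval, timeout time.Duration, check func() error) error {
        deadline := time.Now().Add(timeout)
        for {
            if err := check(); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out")
            }
            time.Sleep(interval)
        }
    }

    func main() {
        err := pollUntil(500*time.Millisecond, 2*time.Minute, func() error {
            return exec.Command("kubectl", "get", "sa", "default").Run()
        })
        fmt.Println("default service account ready:", err == nil)
    }
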
	I1218 22:48:19.180180   54527 settings.go:142] acquiring lock: {Name:mkea14aac8a39c6a2ed200653e9b07ad1584eac7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:48:19.180235   54527 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17822-2192/kubeconfig
	I1218 22:48:19.181025   54527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17822-2192/kubeconfig: {Name:mkf844e795bd9b2be73b36e3c1c24ce0924bf634 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 22:48:19.181731   54527 kapi.go:59] client config for ingress-addon-legacy-319045: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt", KeyFile:"/home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.key", CAFile:"/home/jenkins/minikube-integration/17822-2192/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf310), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
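The rest.Config dump above is what client-go assembles from the kubeconfig written earlier. A minimal sketch of building the same client yourself with client-go (kubeconfig path taken from this run):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17822-2192/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println("host:", cfg.Host, "client ready:", cs != nil)
    }
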
	I1218 22:48:19.182887   54527 config.go:182] Loaded profile config "ingress-addon-legacy-319045": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1218 22:48:19.182944   54527 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1218 22:48:19.183082   54527 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1218 22:48:19.183175   54527 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-319045"
	I1218 22:48:19.183189   54527 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-319045"
	I1218 22:48:19.183251   54527 host.go:66] Checking if "ingress-addon-legacy-319045" exists ...
	I1218 22:48:19.183385   54527 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-319045"
	I1218 22:48:19.183408   54527 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-319045"
	I1218 22:48:19.183727   54527 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-319045 --format={{.State.Status}}
	I1218 22:48:19.184050   54527 cert_rotation.go:137] Starting client certificate rotation controller
	I1218 22:48:19.184523   54527 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-319045 --format={{.State.Status}}
	I1218 22:48:19.244617   54527 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 22:48:19.242892   54527 kapi.go:59] client config for ingress-addon-legacy-319045: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt", KeyFile:"/home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.key", CAFile:"/home/jenkins/minikube-integration/17822-2192/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf310), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 22:48:19.246309   54527 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 22:48:19.246324   54527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1218 22:48:19.246382   54527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-319045
	I1218 22:48:19.246420   54527 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-319045"
	I1218 22:48:19.246449   54527 host.go:66] Checking if "ingress-addon-legacy-319045" exists ...
	I1218 22:48:19.246924   54527 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-319045 --format={{.State.Status}}
	I1218 22:48:19.274163   54527 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1218 22:48:19.274188   54527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1218 22:48:19.274250   54527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-319045
	I1218 22:48:19.281456   54527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/ingress-addon-legacy-319045/id_rsa Username:docker}
	I1218 22:48:19.302814   54527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/ingress-addon-legacy-319045/id_rsa Username:docker}
	I1218 22:48:19.390153   54527 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
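The sed pipeline above injects a hosts block into the coredns Corefile so that host.minikube.internal resolves from inside the cluster. A hedged client-go sketch of the equivalent ConfigMap edit; the plugin-ordering assumption (inserting just before the forward plugin) mirrors the sed expression:

    package main

    import (
        "context"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Insert the hosts block ahead of the forward plugin, as the sed does above.
        hosts := "        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }\n"
        cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "        forward .", hosts+"        forward .", 1)
        if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }
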
	I1218 22:48:19.501709   54527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1218 22:48:19.519203   54527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1218 22:48:19.702115   54527 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-319045" context rescaled to 1 replicas
	I1218 22:48:19.702199   54527 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1218 22:48:19.705657   54527 out.go:177] * Verifying Kubernetes components...
	I1218 22:48:19.707827   54527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 22:48:20.048663   54527 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1218 22:48:20.337520   54527 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1218 22:48:20.340328   54527 addons.go:502] enable addons completed in 1.15724137s: enabled=[default-storageclass storage-provisioner]
	I1218 22:48:20.338429   54527 kapi.go:59] client config for ingress-addon-legacy-319045: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt", KeyFile:"/home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.key", CAFile:"/home/jenkins/minikube-integration/17822-2192/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bf310), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 22:48:20.340726   54527 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-319045" to be "Ready" ...
	I1218 22:48:20.344459   54527 node_ready.go:49] node "ingress-addon-legacy-319045" has status "Ready":"True"
	I1218 22:48:20.344505   54527 node_ready.go:38] duration metric: took 3.737538ms waiting for node "ingress-addon-legacy-319045" to be "Ready" ...
	I1218 22:48:20.344559   54527 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 22:48:20.353956   54527 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-pwcvm" in "kube-system" namespace to be "Ready" ...
	I1218 22:48:22.359302   54527 pod_ready.go:102] pod "coredns-66bff467f8-pwcvm" in "kube-system" namespace has status "Ready":"False"
	I1218 22:48:24.365706   54527 pod_ready.go:102] pod "coredns-66bff467f8-pwcvm" in "kube-system" namespace has status "Ready":"False"
	I1218 22:48:26.859820   54527 pod_ready.go:102] pod "coredns-66bff467f8-pwcvm" in "kube-system" namespace has status "Ready":"False"
	I1218 22:48:28.860317   54527 pod_ready.go:102] pod "coredns-66bff467f8-pwcvm" in "kube-system" namespace has status "Ready":"False"
	I1218 22:48:31.359806   54527 pod_ready.go:102] pod "coredns-66bff467f8-pwcvm" in "kube-system" namespace has status "Ready":"False"
	I1218 22:48:33.859564   54527 pod_ready.go:102] pod "coredns-66bff467f8-pwcvm" in "kube-system" namespace has status "Ready":"False"
	I1218 22:48:36.359498   54527 pod_ready.go:102] pod "coredns-66bff467f8-pwcvm" in "kube-system" namespace has status "Ready":"False"
	I1218 22:48:38.859316   54527 pod_ready.go:102] pod "coredns-66bff467f8-pwcvm" in "kube-system" namespace has status "Ready":"False"
	I1218 22:48:41.359316   54527 pod_ready.go:102] pod "coredns-66bff467f8-pwcvm" in "kube-system" namespace has status "Ready":"False"
	I1218 22:48:43.859595   54527 pod_ready.go:102] pod "coredns-66bff467f8-pwcvm" in "kube-system" namespace has status "Ready":"False"
	I1218 22:48:46.359421   54527 pod_ready.go:102] pod "coredns-66bff467f8-pwcvm" in "kube-system" namespace has status "Ready":"False"
	I1218 22:48:48.359626   54527 pod_ready.go:102] pod "coredns-66bff467f8-pwcvm" in "kube-system" namespace has status "Ready":"False"
	I1218 22:48:50.359775   54527 pod_ready.go:102] pod "coredns-66bff467f8-pwcvm" in "kube-system" namespace has status "Ready":"False"
	I1218 22:48:52.860313   54527 pod_ready.go:102] pod "coredns-66bff467f8-pwcvm" in "kube-system" namespace has status "Ready":"False"
	I1218 22:48:55.360214   54527 pod_ready.go:102] pod "coredns-66bff467f8-pwcvm" in "kube-system" namespace has status "Ready":"False"
	I1218 22:48:57.876304   54527 pod_ready.go:102] pod "coredns-66bff467f8-pwcvm" in "kube-system" namespace has status "Ready":"False"
	I1218 22:48:58.359907   54527 pod_ready.go:92] pod "coredns-66bff467f8-pwcvm" in "kube-system" namespace has status "Ready":"True"
	I1218 22:48:58.359931   54527 pod_ready.go:81] duration metric: took 38.005918408s waiting for pod "coredns-66bff467f8-pwcvm" in "kube-system" namespace to be "Ready" ...
	I1218 22:48:58.359949   54527 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-319045" in "kube-system" namespace to be "Ready" ...
	I1218 22:48:58.364365   54527 pod_ready.go:92] pod "etcd-ingress-addon-legacy-319045" in "kube-system" namespace has status "Ready":"True"
	I1218 22:48:58.364390   54527 pod_ready.go:81] duration metric: took 4.433152ms waiting for pod "etcd-ingress-addon-legacy-319045" in "kube-system" namespace to be "Ready" ...
	I1218 22:48:58.364400   54527 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-319045" in "kube-system" namespace to be "Ready" ...
	I1218 22:48:58.368793   54527 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-319045" in "kube-system" namespace has status "Ready":"True"
	I1218 22:48:58.368818   54527 pod_ready.go:81] duration metric: took 4.411212ms waiting for pod "kube-apiserver-ingress-addon-legacy-319045" in "kube-system" namespace to be "Ready" ...
	I1218 22:48:58.368829   54527 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-319045" in "kube-system" namespace to be "Ready" ...
	I1218 22:48:58.373047   54527 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-319045" in "kube-system" namespace has status "Ready":"True"
	I1218 22:48:58.373069   54527 pod_ready.go:81] duration metric: took 4.210528ms waiting for pod "kube-controller-manager-ingress-addon-legacy-319045" in "kube-system" namespace to be "Ready" ...
	I1218 22:48:58.373080   54527 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-csgz9" in "kube-system" namespace to be "Ready" ...
	I1218 22:48:58.378480   54527 pod_ready.go:92] pod "kube-proxy-csgz9" in "kube-system" namespace has status "Ready":"True"
	I1218 22:48:58.378502   54527 pod_ready.go:81] duration metric: took 5.41686ms waiting for pod "kube-proxy-csgz9" in "kube-system" namespace to be "Ready" ...
	I1218 22:48:58.378512   54527 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-319045" in "kube-system" namespace to be "Ready" ...
	I1218 22:48:58.555935   54527 request.go:629] Waited for 177.318044ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-319045
	I1218 22:48:58.755806   54527 request.go:629] Waited for 197.32162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-319045
	I1218 22:48:58.758590   54527 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-319045" in "kube-system" namespace has status "Ready":"True"
	I1218 22:48:58.758614   54527 pod_ready.go:81] duration metric: took 380.094055ms waiting for pod "kube-scheduler-ingress-addon-legacy-319045" in "kube-system" namespace to be "Ready" ...
	I1218 22:48:58.758626   54527 pod_ready.go:38] duration metric: took 38.414037534s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
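The repeated "Waited ... due to client-side throttling" messages above come from client-go's default rate limiter (5 requests/second with a burst of 10), which the test's rapid polling exceeds. A sketch of raising the limits on a rest.Config, with illustrative values:

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // Defaults are QPS 5 and Burst 10 when left at zero; raising them
        // avoids the client-side waits logged above.
        cfg.QPS = 50
        cfg.Burst = 100
        fmt.Printf("QPS=%v Burst=%v\n", cfg.QPS, cfg.Burst)
    }
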
	I1218 22:48:58.758654   54527 api_server.go:52] waiting for apiserver process to appear ...
	I1218 22:48:58.758725   54527 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 22:48:58.771587   54527 api_server.go:72] duration metric: took 39.069341416s to wait for apiserver process to appear ...
	I1218 22:48:58.771611   54527 api_server.go:88] waiting for apiserver healthz status ...
	I1218 22:48:58.771628   54527 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1218 22:48:58.780336   54527 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1218 22:48:58.781406   54527 api_server.go:141] control plane version: v1.18.20
	I1218 22:48:58.781436   54527 api_server.go:131] duration metric: took 9.818964ms to wait for apiserver health ...
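The healthz probe above is an authenticated HTTPS GET against the API server. A sketch of the same check with net/http, assuming the client certificate and CA paths from this run (whether unauthenticated access to /healthz would also succeed depends on the cluster's auth settings):

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func main() {
        base := "/home/jenkins/minikube-integration/17822-2192/.minikube"
        caPEM, err := os.ReadFile(base + "/ca.crt")
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)
        pair, err := tls.LoadX509KeyPair(
            base+"/profiles/ingress-addon-legacy-319045/client.crt",
            base+"/profiles/ingress-addon-legacy-319045/client.key")
        if err != nil {
            panic(err)
        }
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{RootCAs: pool, Certificates: []tls.Certificate{pair}},
        }}
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // the run above got 200 "ok"
    }
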
	I1218 22:48:58.781446   54527 system_pods.go:43] waiting for kube-system pods to appear ...
	I1218 22:48:58.955805   54527 request.go:629] Waited for 174.296149ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1218 22:48:58.961151   54527 system_pods.go:59] 7 kube-system pods found
	I1218 22:48:58.961187   54527 system_pods.go:61] "coredns-66bff467f8-pwcvm" [a9af1b73-fef6-4808-8303-3ac2295837d6] Running
	I1218 22:48:58.961197   54527 system_pods.go:61] "etcd-ingress-addon-legacy-319045" [b1d82aa7-3391-4861-8dea-53700d443e46] Running
	I1218 22:48:58.961203   54527 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-319045" [7b4125cf-5d92-4849-8925-43bdec84d1a8] Running
	I1218 22:48:58.961209   54527 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-319045" [79d6b8e8-c1f9-4b8f-a93d-ff766039c6dd] Running
	I1218 22:48:58.961214   54527 system_pods.go:61] "kube-proxy-csgz9" [b7179f0f-87eb-4da8-8443-59d010b7b4bc] Running
	I1218 22:48:58.961220   54527 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-319045" [16a2d6ac-8bb7-4bbb-92f8-1c3a768d8155] Running
	I1218 22:48:58.961225   54527 system_pods.go:61] "storage-provisioner" [cf053c64-30b4-4969-b7d8-9592cc35f455] Running
	I1218 22:48:58.961236   54527 system_pods.go:74] duration metric: took 179.784452ms to wait for pod list to return data ...
	I1218 22:48:58.961246   54527 default_sa.go:34] waiting for default service account to be created ...
	I1218 22:48:59.155652   54527 request.go:629] Waited for 194.328313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1218 22:48:59.158118   54527 default_sa.go:45] found service account: "default"
	I1218 22:48:59.158150   54527 default_sa.go:55] duration metric: took 196.891109ms for default service account to be created ...
	I1218 22:48:59.158159   54527 system_pods.go:116] waiting for k8s-apps to be running ...
	I1218 22:48:59.355713   54527 request.go:629] Waited for 197.479342ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1218 22:48:59.361365   54527 system_pods.go:86] 7 kube-system pods found
	I1218 22:48:59.361394   54527 system_pods.go:89] "coredns-66bff467f8-pwcvm" [a9af1b73-fef6-4808-8303-3ac2295837d6] Running
	I1218 22:48:59.361402   54527 system_pods.go:89] "etcd-ingress-addon-legacy-319045" [b1d82aa7-3391-4861-8dea-53700d443e46] Running
	I1218 22:48:59.361411   54527 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-319045" [7b4125cf-5d92-4849-8925-43bdec84d1a8] Running
	I1218 22:48:59.361417   54527 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-319045" [79d6b8e8-c1f9-4b8f-a93d-ff766039c6dd] Running
	I1218 22:48:59.361424   54527 system_pods.go:89] "kube-proxy-csgz9" [b7179f0f-87eb-4da8-8443-59d010b7b4bc] Running
	I1218 22:48:59.361436   54527 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-319045" [16a2d6ac-8bb7-4bbb-92f8-1c3a768d8155] Running
	I1218 22:48:59.361448   54527 system_pods.go:89] "storage-provisioner" [cf053c64-30b4-4969-b7d8-9592cc35f455] Running
	I1218 22:48:59.361454   54527 system_pods.go:126] duration metric: took 203.290504ms to wait for k8s-apps to be running ...
	I1218 22:48:59.361465   54527 system_svc.go:44] waiting for kubelet service to be running ....
	I1218 22:48:59.361522   54527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 22:48:59.375070   54527 system_svc.go:56] duration metric: took 13.59537ms WaitForService to wait for kubelet.
	I1218 22:48:59.375141   54527 kubeadm.go:581] duration metric: took 39.672901157s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1218 22:48:59.375167   54527 node_conditions.go:102] verifying NodePressure condition ...
	I1218 22:48:59.555471   54527 request.go:629] Waited for 180.201599ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1218 22:48:59.558282   54527 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1218 22:48:59.558312   54527 node_conditions.go:123] node cpu capacity is 2
	I1218 22:48:59.558323   54527 node_conditions.go:105] duration metric: took 183.150023ms to run NodePressure ...
	I1218 22:48:59.558335   54527 start.go:228] waiting for startup goroutines ...
	I1218 22:48:59.558341   54527 start.go:233] waiting for cluster config update ...
	I1218 22:48:59.558353   54527 start.go:242] writing updated cluster config ...
	I1218 22:48:59.558622   54527 ssh_runner.go:195] Run: rm -f paused
	I1218 22:48:59.623318   54527 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I1218 22:48:59.625727   54527 out.go:177] 
	W1218 22:48:59.627310   54527 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I1218 22:48:59.628945   54527 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1218 22:48:59.630661   54527 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-319045" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* Dec 18 22:47:42 ingress-addon-legacy-319045 dockerd[1306]: time="2023-12-18T22:47:42.088351217Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Dec 18 22:47:42 ingress-addon-legacy-319045 dockerd[1306]: time="2023-12-18T22:47:42.088413675Z" level=info msg="Daemon has completed initialization"
	Dec 18 22:47:42 ingress-addon-legacy-319045 dockerd[1306]: time="2023-12-18T22:47:42.117675747Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 18 22:47:42 ingress-addon-legacy-319045 systemd[1]: Started Docker Application Container Engine.
	Dec 18 22:47:42 ingress-addon-legacy-319045 dockerd[1306]: time="2023-12-18T22:47:42.118509192Z" level=info msg="API listen on [::]:2376"
	Dec 18 22:49:01 ingress-addon-legacy-319045 dockerd[1306]: time="2023-12-18T22:49:01.140610355Z" level=warning msg="reference for unknown type: " digest="sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" remote="docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7"
	Dec 18 22:49:02 ingress-addon-legacy-319045 dockerd[1306]: time="2023-12-18T22:49:02.671886412Z" level=info msg="ignoring event" container=3fe7290d80c502709990880ac8c819ba0d932d43a12616fd54cf9a6c3fd3cf55 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 22:49:02 ingress-addon-legacy-319045 dockerd[1306]: time="2023-12-18T22:49:02.707803679Z" level=info msg="ignoring event" container=a15770123faedda3c4cfa269dbc7593dcc949af4d319977ea94a7c21ffab8dcf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 22:49:03 ingress-addon-legacy-319045 dockerd[1306]: time="2023-12-18T22:49:03.700171905Z" level=info msg="ignoring event" container=efc31423bce16eddef024f5761a41f642b52e291edfd90a38ab7a6d9763a3cc0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 22:49:03 ingress-addon-legacy-319045 dockerd[1306]: time="2023-12-18T22:49:03.739065922Z" level=info msg="ignoring event" container=e3928350adfbadcf21eadd363e35efb0814011efa6da5d21a17b954d4992a62c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 22:49:04 ingress-addon-legacy-319045 dockerd[1306]: time="2023-12-18T22:49:04.694008942Z" level=warning msg="reference for unknown type: " digest="sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324" remote="registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324"
	Dec 18 22:49:11 ingress-addon-legacy-319045 dockerd[1306]: time="2023-12-18T22:49:11.983193061Z" level=warning msg="Published ports are discarded when using host network mode"
	Dec 18 22:49:12 ingress-addon-legacy-319045 dockerd[1306]: time="2023-12-18T22:49:12.019639845Z" level=warning msg="Published ports are discarded when using host network mode"
	Dec 18 22:49:12 ingress-addon-legacy-319045 dockerd[1306]: time="2023-12-18T22:49:12.164833194Z" level=warning msg="reference for unknown type: " digest="sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" remote="docker.io/cryptexlabs/minikube-ingress-dns@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"
	Dec 18 22:49:18 ingress-addon-legacy-319045 dockerd[1306]: time="2023-12-18T22:49:18.180325561Z" level=info msg="ignoring event" container=34ea23f2eb6aa0232b0e51628f0c6113f6f0d55b43499eb55e2c14e5f5240938 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 22:49:18 ingress-addon-legacy-319045 dockerd[1306]: time="2023-12-18T22:49:18.957416165Z" level=info msg="ignoring event" container=dbc9edbad3f92049f6ba6f3d5c5fcd031393829636073734c33bdb568029692d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 22:49:32 ingress-addon-legacy-319045 dockerd[1306]: time="2023-12-18T22:49:32.180367227Z" level=info msg="ignoring event" container=1ef384bac243d42a6476fd9d9b9704761a2767eb256175588919de1c07a7386f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 22:49:33 ingress-addon-legacy-319045 dockerd[1306]: time="2023-12-18T22:49:33.419420144Z" level=info msg="ignoring event" container=0d5b9762e21af06defd3a4a0ab104b4579f77a3e8feaab94cfa3984cac7a4e2a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 22:49:34 ingress-addon-legacy-319045 dockerd[1306]: time="2023-12-18T22:49:34.461115897Z" level=info msg="ignoring event" container=93e2adb4b620d99eed4653aeea230bce042e006d0573ea714265fdf35abf070a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 22:49:47 ingress-addon-legacy-319045 dockerd[1306]: time="2023-12-18T22:49:47.944284421Z" level=info msg="ignoring event" container=e35af21559032ef7095fad5fd13088b089b17e3cfece6621df3404fee0586878 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 22:49:50 ingress-addon-legacy-319045 dockerd[1306]: time="2023-12-18T22:49:50.041485992Z" level=info msg="ignoring event" container=73124cd7ef77e19ce8707c25776d5df85c0aa580dd8cbc7f580b94438d9d56b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 22:50:00 ingress-addon-legacy-319045 dockerd[1306]: time="2023-12-18T22:50:00.789169045Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=cbdeca99a98773564e9e22facceed672b34dcb74205eba39198c3c3636b698e0
	Dec 18 22:50:00 ingress-addon-legacy-319045 dockerd[1306]: time="2023-12-18T22:50:00.813212442Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=cbdeca99a98773564e9e22facceed672b34dcb74205eba39198c3c3636b698e0
	Dec 18 22:50:00 ingress-addon-legacy-319045 dockerd[1306]: time="2023-12-18T22:50:00.868621833Z" level=info msg="ignoring event" container=cbdeca99a98773564e9e22facceed672b34dcb74205eba39198c3c3636b698e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 22:50:00 ingress-addon-legacy-319045 dockerd[1306]: time="2023-12-18T22:50:00.933589444Z" level=info msg="ignoring event" container=d0c34a6d951a8a92005ce0d9df80747810daa52fb2a1948b84f3a6b7683594f0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	73124cd7ef77e       dd1b12fcb6097                                                                                                      17 seconds ago       Exited              hello-world-app           2                   2ac93c235432e       hello-world-app-5f5d8b66bb-d6rzk
	188e17cb99197       nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc                                      43 seconds ago       Running             nginx                     0                   e7150166d326d       nginx
	cbdeca99a9877       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   57 seconds ago       Exited              controller                0                   d0c34a6d951a8       ingress-nginx-controller-7fcf777cb7-crszk
	a15770123faed       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              patch                     0                   efc31423bce16       ingress-nginx-admission-patch-gvn7j
	3fe7290d80c50       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              create                    0                   e3928350adfba       ingress-nginx-admission-create-wq49v
	d3802f356a922       66749159455b3                                                                                                      About a minute ago   Running             storage-provisioner       0                   cf68a7dfef64f       storage-provisioner
	4e207c2c048a6       6e17ba78cf3eb                                                                                                      About a minute ago   Running             coredns                   0                   8c191d3668bcc       coredns-66bff467f8-pwcvm
	2e925a604a46a       565297bc6f7d4                                                                                                      About a minute ago   Running             kube-proxy                0                   f6b139fa6d02f       kube-proxy-csgz9
	6442d9e052747       ab707b0a0ea33                                                                                                      2 minutes ago        Running             etcd                      0                   811cd6d780077       etcd-ingress-addon-legacy-319045
	f75795a9c4336       095f37015706d                                                                                                      2 minutes ago        Running             kube-scheduler            0                   2487c25cfe149       kube-scheduler-ingress-addon-legacy-319045
	6b05bc442faca       2694cf044d665                                                                                                      2 minutes ago        Running             kube-apiserver            0                   c39a2f85f0cea       kube-apiserver-ingress-addon-legacy-319045
	b6733751bb0fa       68a4fac29a865                                                                                                      2 minutes ago        Running             kube-controller-manager   0                   7e9b0a0704faf       kube-controller-manager-ingress-addon-legacy-319045
	
	* 
	* ==> coredns [4e207c2c048a] <==
	* [INFO] 172.17.0.1:58682 - 31614 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000050536s
	[INFO] 172.17.0.1:58682 - 19065 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000034881s
	[INFO] 172.17.0.1:58682 - 34237 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000050479s
	[INFO] 172.17.0.1:58682 - 12342 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000046171s
	[INFO] 172.17.0.1:58682 - 2057 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000911116s
	[INFO] 172.17.0.1:58682 - 46540 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000840354s
	[INFO] 172.17.0.1:58682 - 61913 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00004371s
	[INFO] 172.17.0.1:22423 - 62256 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000067275s
	[INFO] 172.17.0.1:21675 - 54812 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000085679s
	[INFO] 172.17.0.1:21675 - 24046 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000045827s
	[INFO] 172.17.0.1:21675 - 40332 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000043085s
	[INFO] 172.17.0.1:22423 - 28195 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000044439s
	[INFO] 172.17.0.1:21675 - 55481 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000031335s
	[INFO] 172.17.0.1:22423 - 30745 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00004549s
	[INFO] 172.17.0.1:21675 - 12267 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000030294s
	[INFO] 172.17.0.1:22423 - 65173 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000033732s
	[INFO] 172.17.0.1:22423 - 61484 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003433s
	[INFO] 172.17.0.1:22423 - 31904 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000040354s
	[INFO] 172.17.0.1:21675 - 54119 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000077835s
	[INFO] 172.17.0.1:22423 - 51864 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001771328s
	[INFO] 172.17.0.1:21675 - 13938 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001252528s
	[INFO] 172.17.0.1:22423 - 42049 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001045091s
	[INFO] 172.17.0.1:21675 - 11664 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001118011s
	[INFO] 172.17.0.1:22423 - 27439 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000048788s
	[INFO] 172.17.0.1:21675 - 1737 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000050298s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-319045
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-319045
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0e9e83b1c53ca6148de644b5bd4ad0d762d0d5d2
	                    minikube.k8s.io/name=ingress-addon-legacy-319045
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_18T22_48_04_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Dec 2023 22:48:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-319045
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Dec 2023 22:49:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Dec 2023 22:49:38 +0000   Mon, 18 Dec 2023 22:47:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Dec 2023 22:49:38 +0000   Mon, 18 Dec 2023 22:47:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Dec 2023 22:49:38 +0000   Mon, 18 Dec 2023 22:47:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Dec 2023 22:49:38 +0000   Mon, 18 Dec 2023 22:48:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-319045
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 52fa5c0e7e464139991709e303f8b221
	  System UUID:                20f4d2fb-9ca7-4026-bbd4-81294cadcc0d
	  Boot ID:                    90ea92e2-9dcb-495c-affc-7a21f948b8bd
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-d6rzk                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 coredns-66bff467f8-pwcvm                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     107s
	  kube-system                 etcd-ingress-addon-legacy-319045                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-apiserver-ingress-addon-legacy-319045             250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-319045    200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-csgz9                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-ingress-addon-legacy-319045             100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (0%)   170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  2m12s (x4 over 2m13s)  kubelet     Node ingress-addon-legacy-319045 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m12s (x4 over 2m13s)  kubelet     Node ingress-addon-legacy-319045 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m12s (x4 over 2m13s)  kubelet     Node ingress-addon-legacy-319045 status is now: NodeHasSufficientPID
	  Normal  Starting                 119s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  119s                   kubelet     Node ingress-addon-legacy-319045 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s                   kubelet     Node ingress-addon-legacy-319045 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s                   kubelet     Node ingress-addon-legacy-319045 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  118s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                108s                   kubelet     Node ingress-addon-legacy-319045 status is now: NodeReady
	  Normal  Starting                 106s                   kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000745] FS-Cache: N-cookie c=0000000c [p=00000003 fl=2 nc=0 na=1]
	[  +0.000984] FS-Cache: N-cookie d=0000000026ceb612{9p.inode} n=00000000055811d4
	[  +0.001124] FS-Cache: N-key=[8] 'a16ced0000000000'
	[  +0.002234] FS-Cache: Duplicate cookie detected
	[  +0.000731] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001061] FS-Cache: O-cookie d=0000000026ceb612{9p.inode} n=000000004c37f81d
	[  +0.001101] FS-Cache: O-key=[8] 'a16ced0000000000'
	[  +0.000747] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000977] FS-Cache: N-cookie d=0000000026ceb612{9p.inode} n=00000000eae1a5dc
	[  +0.001119] FS-Cache: N-key=[8] 'a16ced0000000000'
	[  +2.695718] FS-Cache: Duplicate cookie detected
	[  +0.000756] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001028] FS-Cache: O-cookie d=0000000026ceb612{9p.inode} n=000000006f811f52
	[  +0.001128] FS-Cache: O-key=[8] 'a06ced0000000000'
	[  +0.000781] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.001009] FS-Cache: N-cookie d=0000000026ceb612{9p.inode} n=00000000055811d4
	[  +0.001108] FS-Cache: N-key=[8] 'a06ced0000000000'
	[  +0.455587] FS-Cache: Duplicate cookie detected
	[  +0.000743] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001037] FS-Cache: O-cookie d=0000000026ceb612{9p.inode} n=000000007296ded5
	[  +0.001192] FS-Cache: O-key=[8] 'a86ced0000000000'
	[  +0.000768] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000981] FS-Cache: N-cookie d=0000000026ceb612{9p.inode} n=00000000c359d411
	[  +0.001091] FS-Cache: N-key=[8] 'a86ced0000000000'
	[Dec18 22:47] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	* 
	* ==> etcd [6442d9e05274] <==
	* raft2023/12/18 22:47:55 INFO: aec36adc501070cc became follower at term 0
	raft2023/12/18 22:47:55 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/12/18 22:47:55 INFO: aec36adc501070cc became follower at term 1
	raft2023/12/18 22:47:55 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-18 22:47:55.919896 W | auth: simple token is not cryptographically signed
	2023-12-18 22:47:55.928583 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-12-18 22:47:55.931470 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-18 22:47:55.931761 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-12-18 22:47:55.932088 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-12-18 22:47:55.932628 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/12/18 22:47:55 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-18 22:47:55.933132 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2023/12/18 22:47:56 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/12/18 22:47:56 INFO: aec36adc501070cc became candidate at term 2
	raft2023/12/18 22:47:56 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/12/18 22:47:56 INFO: aec36adc501070cc became leader at term 2
	raft2023/12/18 22:47:56 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-12-18 22:47:56.111500 I | etcdserver: published {Name:ingress-addon-legacy-319045 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-12-18 22:47:56.159722 I | embed: ready to serve client requests
	2023-12-18 22:47:56.206098 I | etcdserver: setting up the initial cluster version to 3.4
	2023-12-18 22:47:56.206488 I | embed: ready to serve client requests
	2023-12-18 22:47:56.207953 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-18 22:47:56.208746 I | embed: serving client requests on 192.168.49.2:2379
	2023-12-18 22:47:56.228559 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-12-18 22:47:56.228848 I | etcdserver/api: enabled capabilities for version 3.4
	
	* 
	* ==> kernel <==
	*  22:50:06 up 32 min,  0 users,  load average: 1.28, 1.93, 1.19
	Linux ingress-addon-legacy-319045 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kube-apiserver [6b05bc442fac] <==
	* I1218 22:48:01.188561       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	I1218 22:48:01.317173       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1218 22:48:01.318152       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1218 22:48:01.319443       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1218 22:48:01.321946       1 cache.go:39] Caches are synced for autoregister controller
	I1218 22:48:01.392958       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1218 22:48:02.114741       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1218 22:48:02.114775       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1218 22:48:02.127920       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1218 22:48:02.134892       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1218 22:48:02.134913       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1218 22:48:02.557077       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1218 22:48:02.595956       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1218 22:48:02.684142       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1218 22:48:02.685206       1 controller.go:609] quota admission added evaluator for: endpoints
	I1218 22:48:02.688816       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1218 22:48:03.553569       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1218 22:48:04.328372       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1218 22:48:04.421246       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1218 22:48:07.823641       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1218 22:48:19.376456       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1218 22:48:19.531693       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1218 22:49:00.462596       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1218 22:49:20.751685       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E1218 22:49:58.793217       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [b6733751bb0f] <==
	* I1218 22:48:19.528287       1 shared_informer.go:230] Caches are synced for deployment 
	I1218 22:48:19.540205       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"c1a21870-3675-4ab4-af60-b4f4fb59d80b", APIVersion:"apps/v1", ResourceVersion:"323", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I1218 22:48:19.551650       1 shared_informer.go:230] Caches are synced for disruption 
	I1218 22:48:19.551670       1 disruption.go:339] Sending events to api server.
	I1218 22:48:19.562801       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"792dd5c9-29da-4dc2-a0f7-61d6dbc40ff0", APIVersion:"apps/v1", ResourceVersion:"338", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-pwcvm
	I1218 22:48:19.585461       1 shared_informer.go:230] Caches are synced for persistent volume 
	I1218 22:48:19.614180       1 shared_informer.go:230] Caches are synced for expand 
	I1218 22:48:19.629199       1 shared_informer.go:230] Caches are synced for PV protection 
	I1218 22:48:19.646969       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I1218 22:48:19.668651       1 shared_informer.go:230] Caches are synced for attach detach 
	I1218 22:48:19.691942       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1218 22:48:19.691967       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1218 22:48:19.712406       1 shared_informer.go:230] Caches are synced for resource quota 
	I1218 22:48:19.719100       1 shared_informer.go:230] Caches are synced for resource quota 
	I1218 22:48:19.793105       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
	I1218 22:48:19.793151       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1218 22:49:00.446082       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"3eb8aff7-f217-4983-8e7d-3c888cd21f83", APIVersion:"apps/v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1218 22:49:00.472891       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"1d6ce9ee-fb1b-4118-9ba1-ff89c7bd290f", APIVersion:"apps/v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-crszk
	I1218 22:49:00.523095       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"b6e06139-261c-47d8-abcf-7f0e34899904", APIVersion:"batch/v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-wq49v
	I1218 22:49:00.579876       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"89a67a6e-8a9f-45b3-84c3-758bd94a06cc", APIVersion:"batch/v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-gvn7j
	I1218 22:49:03.666098       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"89a67a6e-8a9f-45b3-84c3-758bd94a06cc", APIVersion:"batch/v1", ResourceVersion:"475", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1218 22:49:03.693763       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"b6e06139-261c-47d8-abcf-7f0e34899904", APIVersion:"batch/v1", ResourceVersion:"471", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1218 22:49:30.531133       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"57bbdb7f-66dc-4be6-950c-9659180766cf", APIVersion:"apps/v1", ResourceVersion:"580", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1218 22:49:30.534454       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"71440ef6-79ae-4ffc-9194-6f2dfc8a32ce", APIVersion:"apps/v1", ResourceVersion:"581", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-d6rzk
	E1218 22:50:03.513344       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-d7h6t" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [2e925a604a46] <==
	* W1218 22:48:20.493655       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1218 22:48:20.505758       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1218 22:48:20.505893       1 server_others.go:186] Using iptables Proxier.
	I1218 22:48:20.506380       1 server.go:583] Version: v1.18.20
	I1218 22:48:20.510080       1 config.go:315] Starting service config controller
	I1218 22:48:20.510183       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1218 22:48:20.510519       1 config.go:133] Starting endpoints config controller
	I1218 22:48:20.510666       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1218 22:48:20.610658       1 shared_informer.go:230] Caches are synced for service config 
	I1218 22:48:20.610905       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [f75795a9c433] <==
	* I1218 22:48:01.300036       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1218 22:48:01.300065       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1218 22:48:01.304687       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1218 22:48:01.305918       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1218 22:48:01.306068       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1218 22:48:01.306180       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1218 22:48:01.309792       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1218 22:48:01.309875       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1218 22:48:01.310589       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1218 22:48:01.310674       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1218 22:48:01.311346       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1218 22:48:01.311475       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1218 22:48:01.312321       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1218 22:48:01.314826       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1218 22:48:01.315075       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1218 22:48:01.315269       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1218 22:48:01.315460       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1218 22:48:01.315648       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1218 22:48:02.131206       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1218 22:48:02.132733       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1218 22:48:02.183831       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1218 22:48:02.192788       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1218 22:48:02.226580       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1218 22:48:02.331499       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1218 22:48:04.706245       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* Dec 18 22:49:36 ingress-addon-legacy-319045 kubelet[2871]: I1218 22:49:36.359747    2871 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 93e2adb4b620d99eed4653aeea230bce042e006d0573ea714265fdf35abf070a
	Dec 18 22:49:36 ingress-addon-legacy-319045 kubelet[2871]: E1218 22:49:36.360119    2871 pod_workers.go:191] Error syncing pod bbbf0a61-17f9-401c-a5d4-8d65738fa095 ("hello-world-app-5f5d8b66bb-d6rzk_default(bbbf0a61-17f9-401c-a5d4-8d65738fa095)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-d6rzk_default(bbbf0a61-17f9-401c-a5d4-8d65738fa095)"
	Dec 18 22:49:46 ingress-addon-legacy-319045 kubelet[2871]: I1218 22:49:46.401434    2871 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-gd62g" (UniqueName: "kubernetes.io/secret/7d28ab76-809d-4915-8a2e-3d4040a30284-minikube-ingress-dns-token-gd62g") pod "7d28ab76-809d-4915-8a2e-3d4040a30284" (UID: "7d28ab76-809d-4915-8a2e-3d4040a30284")
	Dec 18 22:49:46 ingress-addon-legacy-319045 kubelet[2871]: I1218 22:49:46.405426    2871 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d28ab76-809d-4915-8a2e-3d4040a30284-minikube-ingress-dns-token-gd62g" (OuterVolumeSpecName: "minikube-ingress-dns-token-gd62g") pod "7d28ab76-809d-4915-8a2e-3d4040a30284" (UID: "7d28ab76-809d-4915-8a2e-3d4040a30284"). InnerVolumeSpecName "minikube-ingress-dns-token-gd62g". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 18 22:49:46 ingress-addon-legacy-319045 kubelet[2871]: I1218 22:49:46.501780    2871 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-gd62g" (UniqueName: "kubernetes.io/secret/7d28ab76-809d-4915-8a2e-3d4040a30284-minikube-ingress-dns-token-gd62g") on node "ingress-addon-legacy-319045" DevicePath ""
	Dec 18 22:49:48 ingress-addon-legacy-319045 kubelet[2871]: I1218 22:49:48.455861    2871 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 1ef384bac243d42a6476fd9d9b9704761a2767eb256175588919de1c07a7386f
	Dec 18 22:49:49 ingress-addon-legacy-319045 kubelet[2871]: I1218 22:49:49.910619    2871 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 93e2adb4b620d99eed4653aeea230bce042e006d0573ea714265fdf35abf070a
	Dec 18 22:49:50 ingress-addon-legacy-319045 kubelet[2871]: W1218 22:49:50.070762    2871 container.go:412] Failed to create summary reader for "/kubepods/besteffort/podbbbf0a61-17f9-401c-a5d4-8d65738fa095/73124cd7ef77e19ce8707c25776d5df85c0aa580dd8cbc7f580b94438d9d56b6": none of the resources are being tracked.
	Dec 18 22:49:50 ingress-addon-legacy-319045 kubelet[2871]: W1218 22:49:50.473681    2871 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-d6rzk through plugin: invalid network status for
	Dec 18 22:49:50 ingress-addon-legacy-319045 kubelet[2871]: I1218 22:49:50.478631    2871 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 93e2adb4b620d99eed4653aeea230bce042e006d0573ea714265fdf35abf070a
	Dec 18 22:49:50 ingress-addon-legacy-319045 kubelet[2871]: I1218 22:49:50.478935    2871 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 73124cd7ef77e19ce8707c25776d5df85c0aa580dd8cbc7f580b94438d9d56b6
	Dec 18 22:49:50 ingress-addon-legacy-319045 kubelet[2871]: E1218 22:49:50.479160    2871 pod_workers.go:191] Error syncing pod bbbf0a61-17f9-401c-a5d4-8d65738fa095 ("hello-world-app-5f5d8b66bb-d6rzk_default(bbbf0a61-17f9-401c-a5d4-8d65738fa095)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-d6rzk_default(bbbf0a61-17f9-401c-a5d4-8d65738fa095)"
	Dec 18 22:49:51 ingress-addon-legacy-319045 kubelet[2871]: W1218 22:49:51.487153    2871 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-d6rzk through plugin: invalid network status for
	Dec 18 22:49:58 ingress-addon-legacy-319045 kubelet[2871]: E1218 22:49:58.777816    2871 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-crszk.17a20eb973e16625", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-crszk", UID:"07562c0d-e5cc-42a3-8415-8c92fa9fa386", APIVersion:"v1", ResourceVersion:"456", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-319045"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1585105ae370a25, ext:114507674570, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1585105ae370a25, ext:114507674570, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-crszk.17a20eb973e16625" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 18 22:49:58 ingress-addon-legacy-319045 kubelet[2871]: E1218 22:49:58.813587    2871 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-crszk.17a20eb973e16625", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-crszk", UID:"07562c0d-e5cc-42a3-8415-8c92fa9fa386", APIVersion:"v1", ResourceVersion:"456", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-319045"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1585105ae370a25, ext:114507674570, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1585105af68e679, ext:114527719462, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-crszk.17a20eb973e16625" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 18 22:50:01 ingress-addon-legacy-319045 kubelet[2871]: W1218 22:50:01.559370    2871 pod_container_deletor.go:77] Container "d0c34a6d951a8a92005ce0d9df80747810daa52fb2a1948b84f3a6b7683594f0" not found in pod's containers
	Dec 18 22:50:02 ingress-addon-legacy-319045 kubelet[2871]: I1218 22:50:02.937683    2871 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/07562c0d-e5cc-42a3-8415-8c92fa9fa386-webhook-cert") pod "07562c0d-e5cc-42a3-8415-8c92fa9fa386" (UID: "07562c0d-e5cc-42a3-8415-8c92fa9fa386")
	Dec 18 22:50:02 ingress-addon-legacy-319045 kubelet[2871]: I1218 22:50:02.937729    2871 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-4p2q5" (UniqueName: "kubernetes.io/secret/07562c0d-e5cc-42a3-8415-8c92fa9fa386-ingress-nginx-token-4p2q5") pod "07562c0d-e5cc-42a3-8415-8c92fa9fa386" (UID: "07562c0d-e5cc-42a3-8415-8c92fa9fa386")
	Dec 18 22:50:02 ingress-addon-legacy-319045 kubelet[2871]: I1218 22:50:02.943552    2871 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07562c0d-e5cc-42a3-8415-8c92fa9fa386-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "07562c0d-e5cc-42a3-8415-8c92fa9fa386" (UID: "07562c0d-e5cc-42a3-8415-8c92fa9fa386"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 18 22:50:02 ingress-addon-legacy-319045 kubelet[2871]: I1218 22:50:02.944136    2871 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07562c0d-e5cc-42a3-8415-8c92fa9fa386-ingress-nginx-token-4p2q5" (OuterVolumeSpecName: "ingress-nginx-token-4p2q5") pod "07562c0d-e5cc-42a3-8415-8c92fa9fa386" (UID: "07562c0d-e5cc-42a3-8415-8c92fa9fa386"). InnerVolumeSpecName "ingress-nginx-token-4p2q5". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 18 22:50:03 ingress-addon-legacy-319045 kubelet[2871]: I1218 22:50:03.038054    2871 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/07562c0d-e5cc-42a3-8415-8c92fa9fa386-webhook-cert") on node "ingress-addon-legacy-319045" DevicePath ""
	Dec 18 22:50:03 ingress-addon-legacy-319045 kubelet[2871]: I1218 22:50:03.038101    2871 reconciler.go:319] Volume detached for volume "ingress-nginx-token-4p2q5" (UniqueName: "kubernetes.io/secret/07562c0d-e5cc-42a3-8415-8c92fa9fa386-ingress-nginx-token-4p2q5") on node "ingress-addon-legacy-319045" DevicePath ""
	Dec 18 22:50:03 ingress-addon-legacy-319045 kubelet[2871]: W1218 22:50:03.920092    2871 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/07562c0d-e5cc-42a3-8415-8c92fa9fa386/volumes" does not exist
	Dec 18 22:50:05 ingress-addon-legacy-319045 kubelet[2871]: I1218 22:50:05.907044    2871 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 73124cd7ef77e19ce8707c25776d5df85c0aa580dd8cbc7f580b94438d9d56b6
	Dec 18 22:50:05 ingress-addon-legacy-319045 kubelet[2871]: E1218 22:50:05.907353    2871 pod_workers.go:191] Error syncing pod bbbf0a61-17f9-401c-a5d4-8d65738fa095 ("hello-world-app-5f5d8b66bb-d6rzk_default(bbbf0a61-17f9-401c-a5d4-8d65738fa095)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-d6rzk_default(bbbf0a61-17f9-401c-a5d4-8d65738fa095)"
	
	* 
	* ==> storage-provisioner [d3802f356a92] <==
	* I1218 22:48:20.907805       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1218 22:48:20.919642       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1218 22:48:20.919722       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1218 22:48:20.927807       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1218 22:48:20.928854       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-319045_e02e4437-e92e-4041-9a3d-a6e47fd7b06a!
	I1218 22:48:20.930541       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b6fb64a7-2362-4db4-9732-950d12df8753", APIVersion:"v1", ResourceVersion:"374", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-319045_e02e4437-e92e-4041-9a3d-a6e47fd7b06a became leader
	I1218 22:48:21.030096       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-319045_e02e4437-e92e-4041-9a3d-a6e47fd7b06a!

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-319045 -n ingress-addon-legacy-319045
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-319045 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (56.63s)

Test pass (301/330)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 14.85
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.19
10 TestDownloadOnly/v1.28.4/json-events 12.09
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.09
17 TestDownloadOnly/v1.29.0-rc.2/json-events 4.39
20 TestDownloadOnly/v1.29.0-rc.2/binaries 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
23 TestDownloadOnly/DeleteAll 0.24
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.15
26 TestBinaryMirror 0.64
27 TestOffline 58.2
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.11
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.11
32 TestAddons/Setup 142.83
34 TestAddons/parallel/Registry 16.39
36 TestAddons/parallel/InspektorGadget 10.79
37 TestAddons/parallel/MetricsServer 6.76
40 TestAddons/parallel/CSI 45.05
41 TestAddons/parallel/Headlamp 10.43
42 TestAddons/parallel/CloudSpanner 5.54
43 TestAddons/parallel/LocalPath 53.48
44 TestAddons/parallel/NvidiaDevicePlugin 6.48
47 TestAddons/serial/GCPAuth/Namespaces 0.17
48 TestAddons/StoppedEnableDisable 11.1
49 TestCertOptions 35.19
50 TestCertExpiration 252.09
51 TestDockerFlags 33.98
52 TestForceSystemdFlag 38.1
53 TestForceSystemdEnv 44.07
59 TestErrorSpam/setup 32.11
60 TestErrorSpam/start 0.83
61 TestErrorSpam/status 1.12
62 TestErrorSpam/pause 1.45
63 TestErrorSpam/unpause 1.53
64 TestErrorSpam/stop 2.1
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 84.75
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 36.63
71 TestFunctional/serial/KubeContext 0.07
72 TestFunctional/serial/KubectlGetPods 0.1
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.08
76 TestFunctional/serial/CacheCmd/cache/add_local 1
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
78 TestFunctional/serial/CacheCmd/cache/list 0.09
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.36
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.72
81 TestFunctional/serial/CacheCmd/cache/delete 0.16
82 TestFunctional/serial/MinikubeKubectlCmd 0.17
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
84 TestFunctional/serial/ExtraConfig 39.88
85 TestFunctional/serial/ComponentHealth 0.11
86 TestFunctional/serial/LogsCmd 1.23
87 TestFunctional/serial/LogsFileCmd 1.24
88 TestFunctional/serial/InvalidService 4.79
90 TestFunctional/parallel/ConfigCmd 0.57
91 TestFunctional/parallel/DashboardCmd 13.95
92 TestFunctional/parallel/DryRun 0.72
93 TestFunctional/parallel/InternationalLanguage 0.34
94 TestFunctional/parallel/StatusCmd 1.32
98 TestFunctional/parallel/ServiceCmdConnect 7.77
99 TestFunctional/parallel/AddonsCmd 0.22
100 TestFunctional/parallel/PersistentVolumeClaim 26.84
102 TestFunctional/parallel/SSHCmd 0.78
103 TestFunctional/parallel/CpCmd 2.31
105 TestFunctional/parallel/FileSync 0.37
106 TestFunctional/parallel/CertSync 2.32
110 TestFunctional/parallel/NodeLabels 0.1
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.51
114 TestFunctional/parallel/License 0.34
115 TestFunctional/parallel/Version/short 0.08
116 TestFunctional/parallel/Version/components 0.91
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
121 TestFunctional/parallel/ImageCommands/ImageBuild 2.99
122 TestFunctional/parallel/ImageCommands/Setup 2.58
123 TestFunctional/parallel/DockerEnv/bash 1.36
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.29
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.25
126 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.24
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.26
128 TestFunctional/parallel/ServiceCmd/DeployApp 12.28
129 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.88
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.93
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.9
132 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.34
134 TestFunctional/parallel/ServiceCmd/List 0.47
135 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.24
136 TestFunctional/parallel/ServiceCmd/JSONOutput 0.44
137 TestFunctional/parallel/ServiceCmd/HTTPS 0.62
138 TestFunctional/parallel/ServiceCmd/Format 0.51
139 TestFunctional/parallel/ServiceCmd/URL 0.62
141 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.74
142 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
144 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.44
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
146 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
150 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
151 TestFunctional/parallel/ProfileCmd/profile_not_create 0.56
152 TestFunctional/parallel/ProfileCmd/profile_list 0.44
153 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
154 TestFunctional/parallel/MountCmd/any-port 7.85
155 TestFunctional/parallel/MountCmd/specific-port 1.71
156 TestFunctional/parallel/MountCmd/VerifyCleanup 2.87
157 TestFunctional/delete_addon-resizer_images 0.08
158 TestFunctional/delete_my-image_image 0.02
159 TestFunctional/delete_minikube_cached_images 0.02
163 TestImageBuild/serial/Setup 34.23
164 TestImageBuild/serial/NormalBuild 1.67
165 TestImageBuild/serial/BuildWithBuildArg 0.88
166 TestImageBuild/serial/BuildWithDockerIgnore 0.75
167 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.74
170 TestIngressAddonLegacy/StartLegacyK8sCluster 104.49
172 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.44
173 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.64
177 TestJSONOutput/start/Command 58.72
178 TestJSONOutput/start/Audit 0
180 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/pause/Command 0.65
184 TestJSONOutput/pause/Audit 0
186 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/unpause/Command 0.55
190 TestJSONOutput/unpause/Audit 0
192 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/stop/Command 10.91
196 TestJSONOutput/stop/Audit 0
198 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
200 TestErrorJSONOutput 0.26
202 TestKicCustomNetwork/create_custom_network 36.19
203 TestKicCustomNetwork/use_default_bridge_network 38.32
204 TestKicExistingNetwork 34.29
205 TestKicCustomSubnet 34.41
206 TestKicStaticIP 33.88
207 TestMainNoArgs 0.06
208 TestMinikubeProfile 70.04
211 TestMountStart/serial/StartWithMountFirst 7.98
212 TestMountStart/serial/VerifyMountFirst 0.29
213 TestMountStart/serial/StartWithMountSecond 10.11
214 TestMountStart/serial/VerifyMountSecond 0.29
215 TestMountStart/serial/DeleteFirst 1.51
216 TestMountStart/serial/VerifyMountPostDelete 0.29
217 TestMountStart/serial/Stop 1.21
218 TestMountStart/serial/RestartStopped 8.8
219 TestMountStart/serial/VerifyMountPostStop 0.31
222 TestMultiNode/serial/FreshStart2Nodes 81.05
223 TestMultiNode/serial/DeployApp2Nodes 49.43
224 TestMultiNode/serial/PingHostFrom2Pods 1.09
225 TestMultiNode/serial/AddNode 16.96
226 TestMultiNode/serial/MultiNodeLabels 0.1
227 TestMultiNode/serial/ProfileList 0.37
228 TestMultiNode/serial/CopyFile 11.36
229 TestMultiNode/serial/StopNode 2.42
230 TestMultiNode/serial/StartAfterStop 13.82
231 TestMultiNode/serial/RestartKeepsNodes 123.2
232 TestMultiNode/serial/DeleteNode 5.09
233 TestMultiNode/serial/StopMultiNode 21.72
234 TestMultiNode/serial/RestartMultiNode 83.55
235 TestMultiNode/serial/ValidateNameConflict 35
240 TestPreload 128.77
242 TestScheduledStopUnix 106.67
243 TestSkaffold 104.94
245 TestInsufficientStorage 14.33
246 TestRunningBinaryUpgrade 91.6
248 TestKubernetesUpgrade 421.33
249 TestMissingContainerUpgrade 199.35
251 TestPause/serial/Start 96.48
252 TestPause/serial/SecondStartNoReconfiguration 36.7
253 TestPause/serial/Pause 0.95
254 TestPause/serial/VerifyStatus 0.43
255 TestPause/serial/Unpause 0.74
256 TestPause/serial/PauseAgain 1.14
257 TestPause/serial/DeletePaused 2.16
258 TestPause/serial/VerifyDeletedResources 0.48
259 TestStoppedBinaryUpgrade/Setup 0.95
260 TestStoppedBinaryUpgrade/Upgrade 88.62
261 TestStoppedBinaryUpgrade/MinikubeLogs 1.82
270 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
271 TestNoKubernetes/serial/StartWithK8s 40.89
272 TestNoKubernetes/serial/StartWithStopK8s 18.1
273 TestNoKubernetes/serial/Start 11.51
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.5
275 TestNoKubernetes/serial/ProfileList 5.11
276 TestNoKubernetes/serial/Stop 1.36
277 TestNoKubernetes/serial/StartNoArgs 8.28
289 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.42
291 TestStartStop/group/old-k8s-version/serial/FirstStart 131.39
292 TestStartStop/group/old-k8s-version/serial/DeployApp 9.49
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.58
294 TestStartStop/group/old-k8s-version/serial/Stop 11.67
296 TestStartStop/group/no-preload/serial/FirstStart 66.36
297 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.28
298 TestStartStop/group/old-k8s-version/serial/SecondStart 450.45
299 TestStartStop/group/no-preload/serial/DeployApp 9.37
300 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.04
301 TestStartStop/group/no-preload/serial/Stop 10.95
302 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
303 TestStartStop/group/no-preload/serial/SecondStart 349.87
304 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 15.01
305 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
306 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.32
307 TestStartStop/group/no-preload/serial/Pause 3.09
308 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
310 TestStartStop/group/embed-certs/serial/FirstStart 95.52
311 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.12
312 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
313 TestStartStop/group/old-k8s-version/serial/Pause 3.97
315 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 94.85
316 TestStartStop/group/embed-certs/serial/DeployApp 9.36
317 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.21
318 TestStartStop/group/embed-certs/serial/Stop 11.03
319 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.39
320 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
321 TestStartStop/group/embed-certs/serial/SecondStart 321.13
322 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.43
323 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.24
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
325 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 357.04
326 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 16.01
327 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.14
328 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
329 TestStartStop/group/embed-certs/serial/Pause 4.68
331 TestStartStop/group/newest-cni/serial/FirstStart 60.05
332 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 12.01
333 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
334 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
335 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.12
336 TestNetworkPlugins/group/auto/Start 90.87
337 TestStartStop/group/newest-cni/serial/DeployApp 0
338 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.46
339 TestStartStop/group/newest-cni/serial/Stop 11.19
340 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.26
341 TestStartStop/group/newest-cni/serial/SecondStart 35.61
342 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
345 TestStartStop/group/newest-cni/serial/Pause 3.15
346 TestNetworkPlugins/group/kindnet/Start 62.98
347 TestNetworkPlugins/group/auto/KubeletFlags 0.48
348 TestNetworkPlugins/group/auto/NetCatPod 13.44
349 TestNetworkPlugins/group/auto/DNS 0.19
350 TestNetworkPlugins/group/auto/Localhost 0.22
351 TestNetworkPlugins/group/auto/HairPin 0.2
352 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
353 TestNetworkPlugins/group/calico/Start 86.59
354 TestNetworkPlugins/group/kindnet/KubeletFlags 0.37
355 TestNetworkPlugins/group/kindnet/NetCatPod 10.39
356 TestNetworkPlugins/group/kindnet/DNS 0.23
357 TestNetworkPlugins/group/kindnet/Localhost 0.26
358 TestNetworkPlugins/group/kindnet/HairPin 0.21
359 TestNetworkPlugins/group/custom-flannel/Start 72.95
360 TestNetworkPlugins/group/calico/ControllerPod 6.01
361 TestNetworkPlugins/group/calico/KubeletFlags 0.39
362 TestNetworkPlugins/group/calico/NetCatPod 11.31
363 TestNetworkPlugins/group/calico/DNS 0.28
364 TestNetworkPlugins/group/calico/Localhost 0.26
365 TestNetworkPlugins/group/calico/HairPin 0.25
366 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.61
367 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.39
368 TestNetworkPlugins/group/custom-flannel/DNS 0.25
369 TestNetworkPlugins/group/custom-flannel/Localhost 0.24
370 TestNetworkPlugins/group/custom-flannel/HairPin 0.26
371 TestNetworkPlugins/group/false/Start 94.3
372 TestNetworkPlugins/group/enable-default-cni/Start 90.67
373 TestNetworkPlugins/group/false/KubeletFlags 0.35
374 TestNetworkPlugins/group/false/NetCatPod 10.26
375 TestNetworkPlugins/group/false/DNS 0.21
376 TestNetworkPlugins/group/false/Localhost 0.18
377 TestNetworkPlugins/group/false/HairPin 0.17
378 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.44
379 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.37
380 TestNetworkPlugins/group/flannel/Start 71.34
381 TestNetworkPlugins/group/enable-default-cni/DNS 0.28
382 TestNetworkPlugins/group/enable-default-cni/Localhost 0.24
383 TestNetworkPlugins/group/enable-default-cni/HairPin 0.21
384 TestNetworkPlugins/group/bridge/Start 92.3
385 TestNetworkPlugins/group/flannel/ControllerPod 6.01
386 TestNetworkPlugins/group/flannel/KubeletFlags 0.5
387 TestNetworkPlugins/group/flannel/NetCatPod 11.36
388 TestNetworkPlugins/group/flannel/DNS 0.19
389 TestNetworkPlugins/group/flannel/Localhost 0.16
390 TestNetworkPlugins/group/flannel/HairPin 0.19
391 TestNetworkPlugins/group/kubenet/Start 90.17
392 TestNetworkPlugins/group/bridge/KubeletFlags 0.61
393 TestNetworkPlugins/group/bridge/NetCatPod 11.49
394 TestNetworkPlugins/group/bridge/DNS 0.23
395 TestNetworkPlugins/group/bridge/Localhost 0.19
396 TestNetworkPlugins/group/bridge/HairPin 0.22
397 TestNetworkPlugins/group/kubenet/KubeletFlags 0.33
398 TestNetworkPlugins/group/kubenet/NetCatPod 10.25
399 TestNetworkPlugins/group/kubenet/DNS 0.18
400 TestNetworkPlugins/group/kubenet/Localhost 0.17
401 TestNetworkPlugins/group/kubenet/HairPin 0.17

TestDownloadOnly/v1.16.0/json-events (14.85s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-196183 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-196183 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (14.854274205s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (14.85s)
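The pass above also records the exact command the harness runs, so the download-only flow is easy to reproduce by hand. A minimal sketch with a stock minikube binary (the profile name "download-demo" is illustrative; the flags mirror the logged invocation):

    # Download-only run: fetches the preload and binaries without creating a node.
    # Profile name "download-demo" is illustrative.
    minikube start -o=json --download-only -p download-demo --force \
      --alsologtostderr --kubernetes-version=v1.16.0 \
      --driver=docker --container-runtime=docker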

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.19s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-196183
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-196183: exit status 85 (191.622209ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-196183 | jenkins | v1.32.0 | 18 Dec 23 22:36 UTC |          |
	|         | -p download-only-196183        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 22:36:41
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 22:36:41.259269    7494 out.go:296] Setting OutFile to fd 1 ...
	I1218 22:36:41.259473    7494 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 22:36:41.259503    7494 out.go:309] Setting ErrFile to fd 2...
	I1218 22:36:41.259524    7494 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 22:36:41.259800    7494 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-2192/.minikube/bin
	W1218 22:36:41.259974    7494 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17822-2192/.minikube/config/config.json: open /home/jenkins/minikube-integration/17822-2192/.minikube/config/config.json: no such file or directory
	I1218 22:36:41.260448    7494 out.go:303] Setting JSON to true
	I1218 22:36:41.261229    7494 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1151,"bootTime":1702937851,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1218 22:36:41.261324    7494 start.go:138] virtualization:  
	I1218 22:36:41.264377    7494 out.go:97] [download-only-196183] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1218 22:36:41.266834    7494 out.go:169] MINIKUBE_LOCATION=17822
	W1218 22:36:41.264601    7494 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17822-2192/.minikube/cache/preloaded-tarball: no such file or directory
	I1218 22:36:41.264659    7494 notify.go:220] Checking for updates...
	I1218 22:36:41.269034    7494 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 22:36:41.271109    7494 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17822-2192/kubeconfig
	I1218 22:36:41.272990    7494 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-2192/.minikube
	I1218 22:36:41.274992    7494 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1218 22:36:41.278640    7494 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1218 22:36:41.278911    7494 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 22:36:41.302855    7494 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1218 22:36:41.302962    7494 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 22:36:41.672408    7494 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-12-18 22:36:41.66268036 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 22:36:41.672514    7494 docker.go:295] overlay module found
	I1218 22:36:41.674327    7494 out.go:97] Using the docker driver based on user configuration
	I1218 22:36:41.674350    7494 start.go:298] selected driver: docker
	I1218 22:36:41.674364    7494 start.go:902] validating driver "docker" against <nil>
	I1218 22:36:41.674464    7494 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 22:36:41.749224    7494 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-12-18 22:36:41.740175057 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 22:36:41.749375    7494 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1218 22:36:41.749734    7494 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1218 22:36:41.749903    7494 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1218 22:36:41.752051    7494 out.go:169] Using Docker driver with root privileges
	I1218 22:36:41.753769    7494 cni.go:84] Creating CNI manager for ""
	I1218 22:36:41.753791    7494 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1218 22:36:41.753804    7494 start_flags.go:323] config:
	{Name:download-only-196183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-196183 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 22:36:41.755650    7494 out.go:97] Starting control plane node download-only-196183 in cluster download-only-196183
	I1218 22:36:41.755669    7494 cache.go:121] Beginning downloading kic base image for docker with docker
	I1218 22:36:41.757325    7494 out.go:97] Pulling base image v0.0.42-1702920864-17822 ...
	I1218 22:36:41.757354    7494 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1218 22:36:41.757499    7494 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon
	I1218 22:36:41.774277    7494 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 to local cache
	I1218 22:36:41.774467    7494 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory
	I1218 22:36:41.774567    7494 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 to local cache
	I1218 22:36:41.832848    7494 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1218 22:36:41.832874    7494 cache.go:56] Caching tarball of preloaded images
	I1218 22:36:41.833027    7494 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1218 22:36:41.835719    7494 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1218 22:36:41.835743    7494 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1218 22:36:41.947475    7494 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /home/jenkins/minikube-integration/17822-2192/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1218 22:36:50.627109    7494 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1218 22:36:50.627237    7494 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17822-2192/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-196183"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.19s)
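The download URL logged above carries the expected md5 as a query parameter, and the harness verifies it after saving. The same check can be repeated by hand against the cached tarball; a sketch reusing the path and digest from the log:

    # Recompute the md5 of the cached v1.16.0 preload and compare it with the
    # digest from the ?checksum=md5:... parameter in the download URL above.
    md5sum /home/jenkins/minikube-integration/17822-2192/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
    # expected: a000baffb0664b293d602f95ed25caa6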

TestDownloadOnly/v1.28.4/json-events (12.09s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-196183 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-196183 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker  --container-runtime=docker: (12.093401542s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (12.09s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-196183
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-196183: exit status 85 (86.989143ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-196183 | jenkins | v1.32.0 | 18 Dec 23 22:36 UTC |          |
	|         | -p download-only-196183        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-196183 | jenkins | v1.32.0 | 18 Dec 23 22:36 UTC |          |
	|         | -p download-only-196183        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 22:36:56
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 22:36:56.318314    7569 out.go:296] Setting OutFile to fd 1 ...
	I1218 22:36:56.318487    7569 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 22:36:56.318494    7569 out.go:309] Setting ErrFile to fd 2...
	I1218 22:36:56.318501    7569 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 22:36:56.318810    7569 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-2192/.minikube/bin
	W1218 22:36:56.318931    7569 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17822-2192/.minikube/config/config.json: open /home/jenkins/minikube-integration/17822-2192/.minikube/config/config.json: no such file or directory
	I1218 22:36:56.319172    7569 out.go:303] Setting JSON to true
	I1218 22:36:56.319858    7569 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1166,"bootTime":1702937851,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1218 22:36:56.319928    7569 start.go:138] virtualization:  
	I1218 22:36:56.329132    7569 out.go:97] [download-only-196183] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1218 22:36:56.329457    7569 notify.go:220] Checking for updates...
	I1218 22:36:56.337681    7569 out.go:169] MINIKUBE_LOCATION=17822
	I1218 22:36:56.345519    7569 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 22:36:56.354842    7569 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17822-2192/kubeconfig
	I1218 22:36:56.362872    7569 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-2192/.minikube
	I1218 22:36:56.367648    7569 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1218 22:36:56.387693    7569 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1218 22:36:56.388182    7569 config.go:182] Loaded profile config "download-only-196183": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1218 22:36:56.388262    7569 start.go:810] api.Load failed for download-only-196183: filestore "download-only-196183": Docker machine "download-only-196183" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1218 22:36:56.388363    7569 driver.go:392] Setting default libvirt URI to qemu:///system
	W1218 22:36:56.388389    7569 start.go:810] api.Load failed for download-only-196183: filestore "download-only-196183": Docker machine "download-only-196183" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1218 22:36:56.421685    7569 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1218 22:36:56.421804    7569 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 22:36:56.513745    7569 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-12-18 22:36:56.504307712 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 22:36:56.513839    7569 docker.go:295] overlay module found
	I1218 22:36:56.527024    7569 out.go:97] Using the docker driver based on existing profile
	I1218 22:36:56.527055    7569 start.go:298] selected driver: docker
	I1218 22:36:56.527063    7569 start.go:902] validating driver "docker" against &{Name:download-only-196183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-196183 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 22:36:56.527250    7569 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 22:36:56.611482    7569 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-12-18 22:36:56.602548405 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 22:36:56.611955    7569 cni.go:84] Creating CNI manager for ""
	I1218 22:36:56.611980    7569 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1218 22:36:56.611997    7569 start_flags.go:323] config:
	{Name:download-only-196183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-196183 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 22:36:56.621824    7569 out.go:97] Starting control plane node download-only-196183 in cluster download-only-196183
	I1218 22:36:56.621852    7569 cache.go:121] Beginning downloading kic base image for docker with docker
	I1218 22:36:56.631616    7569 out.go:97] Pulling base image v0.0.42-1702920864-17822 ...
	I1218 22:36:56.631645    7569 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1218 22:36:56.631692    7569 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local docker daemon
	I1218 22:36:56.648474    7569 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 to local cache
	I1218 22:36:56.648630    7569 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory
	I1218 22:36:56.648650    7569 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 in local cache directory, skipping pull
	I1218 22:36:56.648655    7569 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 exists in cache, skipping pull
	I1218 22:36:56.648663    7569 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 as a tarball
	I1218 22:36:56.695724    7569 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1218 22:36:56.695746    7569 cache.go:56] Caching tarball of preloaded images
	I1218 22:36:56.695900    7569 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1218 22:36:56.706252    7569 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1218 22:36:56.706286    7569 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I1218 22:36:56.822437    7569 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4?checksum=md5:6fb922d1d9dc01a9d3c0b965ed219613 -> /home/jenkins/minikube-integration/17822-2192/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1218 22:37:04.322162    7569 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I1218 22:37:04.322262    7569 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17822-2192/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I1218 22:37:05.150710    7569 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1218 22:37:05.150841    7569 profile.go:148] Saving config to /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/download-only-196183/config.json ...
	I1218 22:37:05.151062    7569 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1218 22:37:05.151270    7569 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/17822-2192/.minikube/cache/linux/arm64/v1.28.4/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-196183"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

TestDownloadOnly/v1.29.0-rc.2/json-events (4.39s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-196183 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-196183 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.393072653s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (4.39s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
--- PASS: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-196183
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-196183: exit status 85 (82.30519ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-196183 | jenkins | v1.32.0 | 18 Dec 23 22:36 UTC |          |
	|         | -p download-only-196183           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-196183 | jenkins | v1.32.0 | 18 Dec 23 22:36 UTC |          |
	|         | -p download-only-196183           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-196183 | jenkins | v1.32.0 | 18 Dec 23 22:37 UTC |          |
	|         | -p download-only-196183           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 22:37:08
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 22:37:08.493472    7643 out.go:296] Setting OutFile to fd 1 ...
	I1218 22:37:08.493612    7643 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 22:37:08.493622    7643 out.go:309] Setting ErrFile to fd 2...
	I1218 22:37:08.493628    7643 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 22:37:08.493882    7643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-2192/.minikube/bin
	W1218 22:37:08.494025    7643 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17822-2192/.minikube/config/config.json: open /home/jenkins/minikube-integration/17822-2192/.minikube/config/config.json: no such file or directory
	I1218 22:37:08.494255    7643 out.go:303] Setting JSON to true
	I1218 22:37:08.494955    7643 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1178,"bootTime":1702937851,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1218 22:37:08.495020    7643 start.go:138] virtualization:  
	I1218 22:37:08.497503    7643 out.go:97] [download-only-196183] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1218 22:37:08.499529    7643 out.go:169] MINIKUBE_LOCATION=17822
	I1218 22:37:08.497850    7643 notify.go:220] Checking for updates...
	I1218 22:37:08.503572    7643 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 22:37:08.505880    7643 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17822-2192/kubeconfig
	I1218 22:37:08.507769    7643 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-2192/.minikube
	I1218 22:37:08.509790    7643 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-196183"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)
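Each LogsDuration subtest passes despite the non-zero exit: download-only runs never create a control-plane node, so "minikube logs" failing with exit status 85 is exactly the behavior the test asserts. The same outcome can be observed directly:

    # With a download-only profile there is no node to collect logs from;
    # the command exits non-zero (85 in the runs above).
    out/minikube-linux-arm64 logs -p download-only-196183; echo "exit=$?"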

TestDownloadOnly/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.24s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-196183
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)
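DeleteAlwaysSucceeds runs after DeleteAll has already removed every profile, confirming that profile deletion is idempotent. A sketch of the same sequence:

    # Deleting a profile that no longer exists should still exit 0,
    # which is what the passing subtest above asserts.
    minikube delete --all
    minikube delete -p download-only-196183; echo "exit=$?"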

TestBinaryMirror (0.64s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-625373 --alsologtostderr --binary-mirror http://127.0.0.1:35709 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-625373" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-625373
--- PASS: TestBinaryMirror (0.64s)
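TestBinaryMirror points the download-only flow at a local HTTP mirror instead of dl.k8s.io. A sketch of the same shape (the mirror URL and profile name are illustrative; the test served its mirror on an ephemeral local port):

    # Fetch Kubernetes binaries from a self-hosted mirror; the server must
    # already be serving the release files at this address (URL illustrative).
    minikube start --download-only -p binary-mirror-demo --alsologtostderr \
      --binary-mirror http://127.0.0.1:35709 --driver=docker --container-runtime=docker
    minikube delete -p binary-mirror-demo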

TestOffline (58.2s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-946646 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-946646 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (56.022181554s)
helpers_test.go:175: Cleaning up "offline-docker-946646" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-946646
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-946646: (2.175836412s)
--- PASS: TestOffline (58.20s)
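TestOffline performs a full start with --wait=true, so the command only returns once the core components report healthy (about 56s in the run above). Reproducing the logged invocation (profile name illustrative):

    # Full start, blocking until the cluster is ready; delete cleans up after.
    # Profile name "offline-demo" is illustrative.
    minikube start -p offline-demo --alsologtostderr -v=1 --memory=2048 \
      --wait=true --driver=docker --container-runtime=docker
    minikube delete -p offline-demo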

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.11s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-277112
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-277112: exit status 85 (112.856297ms)

-- stdout --
	* Profile "addons-277112" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-277112"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.11s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.11s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-277112
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-277112: exit status 85 (112.446767ms)

-- stdout --
	* Profile "addons-277112" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-277112"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.11s)

TestAddons/Setup (142.83s)
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-277112 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-277112 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (2m22.830674085s)
--- PASS: TestAddons/Setup (142.83s)

TestAddons/parallel/Registry (16.39s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 34.960902ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-2jt27" [fde55396-40c0-4d55-b4b6-aea03fafe5c2] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004309052s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-xqcw7" [2e273129-0475-446b-8f43-0a9765d21350] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004917106s
addons_test.go:339: (dbg) Run:  kubectl --context addons-277112 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-277112 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-277112 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.04644836s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p addons-277112 ip
2023/12/18 22:39:53 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-arm64 -p addons-277112 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.39s)
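The probe above exercises the registry addon end to end: a throwaway busybox pod runs wget --spider against the in-cluster service DNS name, and the command's exit status is enough to assert reachability. A minimal Go sketch of the same probe, assuming kubectl is on PATH and the addons-277112 context from this run exists; this is illustrative, not the harness code, and the -it flag from the logged command is dropped since there is no TTY here:

// registryprobe.go: re-run the logged registry reachability check.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same pod/image/URL as the logged kubectl run invocation.
	cmd := exec.Command("kubectl", "--context", "addons-277112",
		"run", "--rm", "registry-test", "--restart=Never",
		"--image=gcr.io/k8s-minikube/busybox", "--",
		"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		// wget --spider exits non-zero on HTTP errors, so err implies unreachable.
		fmt.Println("registry probe failed:", err)
	}
}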

TestAddons/parallel/InspektorGadget (10.79s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-pgfpz" [32a84c42-4490-4fb1-9c14-29e27be67e32] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005566585s
addons_test.go:840: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-277112
addons_test.go:840: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-277112: (5.782933415s)
--- PASS: TestAddons/parallel/InspektorGadget (10.79s)

TestAddons/parallel/MetricsServer (6.76s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 5.080821ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-2sqdc" [7804eeb4-a7ff-4d3b-926f-08929bbf85cd] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004227952s
addons_test.go:414: (dbg) Run:  kubectl --context addons-277112 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-arm64 -p addons-277112 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.76s)

TestAddons/parallel/CSI (45.05s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 36.113644ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-277112 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-277112 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-277112 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-277112 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-277112 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-277112 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-277112 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-277112 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-277112 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-277112 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-277112 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-277112 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-277112 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-277112 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-277112 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [63ca5686-02b4-4e0d-8791-808f39c11075] Pending
helpers_test.go:344: "task-pv-pod" [63ca5686-02b4-4e0d-8791-808f39c11075] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [63ca5686-02b4-4e0d-8791-808f39c11075] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.004149693s
addons_test.go:583: (dbg) Run:  kubectl --context addons-277112 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-277112 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-277112 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-277112 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-277112 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-277112 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-277112 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-277112 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-277112 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [07e15405-6044-4313-bc3b-f7286190fa5f] Pending
helpers_test.go:344: "task-pv-pod-restore" [07e15405-6044-4313-bc3b-f7286190fa5f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [07e15405-6044-4313-bc3b-f7286190fa5f] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003672481s
addons_test.go:625: (dbg) Run:  kubectl --context addons-277112 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-277112 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-277112 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-arm64 -p addons-277112 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-arm64 -p addons-277112 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.880857177s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-arm64 -p addons-277112 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (45.05s)
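The repeated "get pvc ... -o jsonpath={.status.phase}" lines above are a poll-until-Bound loop. A minimal Go sketch of that pattern, assuming kubectl is on PATH; waitPVCBound is a hypothetical helper named here for illustration, the context and PVC names come from the log, and the 2-second interval is an assumption:

// pvcwait.go: poll a PVC's phase until it reports Bound or a timeout elapses.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitPVCBound(ctx, name, ns string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx, "get", "pvc", name,
			"-o", "jsonpath={.status.phase}", "-n", ns).Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // assumed poll interval
	}
	return fmt.Errorf("pvc %s/%s not Bound within %v", ns, name, timeout)
}

func main() {
	// 6m0s matches the wait budget shown in the log.
	if err := waitPVCBound("addons-277112", "hpvc", "default", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}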

TestAddons/parallel/Headlamp (10.43s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-277112 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-277112 --alsologtostderr -v=1: (1.423130977s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-hdblv" [5dc6972e-223b-4295-bd2c-cb9f04ae9a0d] Pending
helpers_test.go:344: "headlamp-777fd4b855-hdblv" [5dc6972e-223b-4295-bd2c-cb9f04ae9a0d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-hdblv" [5dc6972e-223b-4295-bd2c-cb9f04ae9a0d] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.003767098s
--- PASS: TestAddons/parallel/Headlamp (10.43s)

TestAddons/parallel/CloudSpanner (5.54s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-jkbsv" [559d2d73-32c7-49b4-ac77-b6f65d84202e] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004211976s
addons_test.go:859: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-277112
--- PASS: TestAddons/parallel/CloudSpanner (5.54s)

TestAddons/parallel/LocalPath (53.48s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-277112 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-277112 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-277112 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-277112 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-277112 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-277112 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-277112 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-277112 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [a5de2781-1999-42cc-9487-0c355174e36a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [a5de2781-1999-42cc-9487-0c355174e36a] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [a5de2781-1999-42cc-9487-0c355174e36a] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004378513s
addons_test.go:890: (dbg) Run:  kubectl --context addons-277112 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-arm64 -p addons-277112 ssh "cat /opt/local-path-provisioner/pvc-2c036aaf-9188-4922-a1b6-850c21e22b1b_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-277112 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-277112 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-arm64 -p addons-277112 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-arm64 -p addons-277112 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.125630782s)
--- PASS: TestAddons/parallel/LocalPath (53.48s)

TestAddons/parallel/NvidiaDevicePlugin (6.48s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-pw89s" [59ffba20-0b33-407f-bda7-71147794901d] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003790657s
addons_test.go:954: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-277112
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.48s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-277112 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-277112 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/StoppedEnableDisable (11.1s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-277112
addons_test.go:171: (dbg) Done: out/minikube-linux-arm64 stop -p addons-277112: (10.79823818s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-277112
addons_test.go:179: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-277112
addons_test.go:184: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-277112
--- PASS: TestAddons/StoppedEnableDisable (11.10s)

TestCertOptions (35.19s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-440976 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
E1218 23:20:35.486389    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/functional-790753/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-440976 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (32.319773788s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-440976 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-440976 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-440976 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-440976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-440976
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-440976: (2.1521924s)
--- PASS: TestCertOptions (35.19s)
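The logged openssl call dumps the apiserver certificate from inside the node so the extra --apiserver-ips and --apiserver-names values can be asserted as SANs. A minimal Go sketch of that check under the assumption that a plain substring match on the openssl text output is sufficient; binary path, profile name, and expected values come from the logged command:

// sancheck.go: dump the apiserver cert via minikube ssh and look for the extra SANs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "cert-options-440976",
		"ssh", "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").Output()
	if err != nil {
		fmt.Println("ssh/openssl failed:", err)
		return
	}
	for _, want := range []string{"192.168.15.15", "www.google.com"} {
		if !strings.Contains(string(out), want) {
			fmt.Println("missing SAN:", want)
		}
	}
}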

TestCertExpiration (252.09s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-018197 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
E1218 23:18:58.491543    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/skaffold-545039/client.crt: no such file or directory
E1218 23:19:10.766692    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: no such file or directory
E1218 23:19:26.217718    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/skaffold-545039/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-018197 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (43.103404734s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-018197 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-018197 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (26.639681016s)
helpers_test.go:175: Cleaning up "cert-expiration-018197" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-018197
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-018197: (2.344579226s)
--- PASS: TestCertExpiration (252.09s)

TestDockerFlags (33.98s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-454324 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1218 23:19:37.876626    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-454324 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (31.219253655s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-454324 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-454324 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-454324" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-454324
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-454324: (2.085275741s)
--- PASS: TestDockerFlags (33.98s)
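The two systemctl calls above verify that the --docker-env and --docker-opt values actually reached dockerd's systemd unit. A minimal Go sketch of the Environment half of that assertion, using the same ssh command the log shows; the substring check is an assumption about how strict the match needs to be:

// dockerenv.go: confirm --docker-env values appear in dockerd's unit environment.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "docker-flags-454324",
		"ssh", "sudo systemctl show docker --property=Environment --no-pager").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	// FOO=BAR and BAZ=BAT were passed via --docker-env in the logged start command.
	for _, kv := range []string{"FOO=BAR", "BAZ=BAT"} {
		if !strings.Contains(string(out), kv) {
			fmt.Println("docker-env value not found:", kv)
		}
	}
}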

TestForceSystemdFlag (38.1s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-887846 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-887846 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (35.421417493s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-887846 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-887846" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-887846
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-887846: (2.263110395s)
--- PASS: TestForceSystemdFlag (38.10s)

TestForceSystemdEnv (44.07s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-120974 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-120974 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (41.266017076s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-120974 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-120974" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-120974
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-120974: (2.331364105s)
--- PASS: TestForceSystemdEnv (44.07s)

TestErrorSpam/setup (32.11s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-695916 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-695916 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-695916 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-695916 --driver=docker  --container-runtime=docker: (32.105794907s)
--- PASS: TestErrorSpam/setup (32.11s)

TestErrorSpam/start (0.83s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-695916 --log_dir /tmp/nospam-695916 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-695916 --log_dir /tmp/nospam-695916 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-695916 --log_dir /tmp/nospam-695916 start --dry-run
--- PASS: TestErrorSpam/start (0.83s)

TestErrorSpam/status (1.12s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-695916 --log_dir /tmp/nospam-695916 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-695916 --log_dir /tmp/nospam-695916 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-695916 --log_dir /tmp/nospam-695916 status
--- PASS: TestErrorSpam/status (1.12s)

TestErrorSpam/pause (1.45s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-695916 --log_dir /tmp/nospam-695916 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-695916 --log_dir /tmp/nospam-695916 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-695916 --log_dir /tmp/nospam-695916 pause
--- PASS: TestErrorSpam/pause (1.45s)

TestErrorSpam/unpause (1.53s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-695916 --log_dir /tmp/nospam-695916 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-695916 --log_dir /tmp/nospam-695916 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-695916 --log_dir /tmp/nospam-695916 unpause
--- PASS: TestErrorSpam/unpause (1.53s)

TestErrorSpam/stop (2.1s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-695916 --log_dir /tmp/nospam-695916 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-695916 --log_dir /tmp/nospam-695916 stop: (1.866347048s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-695916 --log_dir /tmp/nospam-695916 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-695916 --log_dir /tmp/nospam-695916 stop
--- PASS: TestErrorSpam/stop (2.10s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17822-2192/.minikube/files/etc/test/nested/copy/7489/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (84.75s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-arm64 start -p functional-790753 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2233: (dbg) Done: out/minikube-linux-arm64 start -p functional-790753 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m24.747131997s)
--- PASS: TestFunctional/serial/StartWithProxy (84.75s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.63s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-790753 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-790753 --alsologtostderr -v=8: (36.631011121s)
functional_test.go:659: soft start took 36.633147285s for "functional-790753" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.63s)

TestFunctional/serial/KubeContext (0.07s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.1s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-790753 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-790753 cache add registry.k8s.io/pause:3.1: (1.028144853s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-790753 cache add registry.k8s.io/pause:3.3: (1.061982385s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.08s)

TestFunctional/serial/CacheCmd/cache/add_local (1s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-790753 /tmp/TestFunctionalserialCacheCmdcacheadd_local2943114066/001
E1218 22:44:37.879403    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
E1218 22:44:37.886773    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
E1218 22:44:37.897382    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
E1218 22:44:37.917606    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
E1218 22:44:37.957838    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
E1218 22:44:38.038106    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 cache add minikube-local-cache-test:functional-790753
E1218 22:44:38.198479    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
E1218 22:44:38.519445    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 cache delete minikube-local-cache-test:functional-790753
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-790753
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.00s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.09s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh sudo crictl images
E1218 22:44:39.159603    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-790753 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (349.98438ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 cache reload
E1218 22:44:40.439869    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)
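The sequence above is a remove/verify-missing/reload/verify-present cycle: the image is deleted in the node, crictl inspecti is expected to fail (the exit status 1 in the log), cache reload repopulates it, and inspecti is expected to succeed. A minimal Go sketch of the same cycle, mirroring the logged commands; run is a hypothetical helper and error handling is deliberately thin:

// cachereload.go: exercise the cache reload round trip shown in the log.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	mk := "out/minikube-linux-arm64"
	_ = run(mk, "-p", "functional-790753", "ssh", "sudo docker rmi registry.k8s.io/pause:latest")
	// Expected to fail while the image is absent.
	_ = run(mk, "-p", "functional-790753", "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest")
	if err := run(mk, "-p", "functional-790753", "cache", "reload"); err != nil {
		fmt.Println("cache reload failed:", err)
		return
	}
	if err := run(mk, "-p", "functional-790753", "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image still missing after reload:", err)
	}
}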

TestFunctional/serial/CacheCmd/cache/delete (0.16s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (0.17s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 kubectl -- --context functional-790753 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.17s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-790753 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (39.88s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-790753 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1218 22:44:43.000645    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
E1218 22:44:48.121477    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
E1218 22:44:58.361692    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
E1218 22:45:18.841828    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-790753 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.877579052s)
functional_test.go:757: restart took 39.877668358s for "functional-790753" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.88s)

TestFunctional/serial/ComponentHealth (0.11s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-790753 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)
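The "phase: Running / status: Ready" lines above come from parsing the control-plane pod list as JSON. A minimal Go sketch of that parse, covering only the fields the check uses; the struct shape is an assumption matched to the Pod API, not the harness's own types:

// health.go: print phase and Ready condition for control-plane pods.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct{ Name string } `json:"metadata"`
		Status   struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-790753",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		fmt.Println("bad JSON:", err)
		return
	}
	for _, p := range pods.Items {
		ready := "NotReady"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = "Ready"
			}
		}
		fmt.Printf("%s phase: %s status: %s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}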

TestFunctional/serial/LogsCmd (1.23s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-790753 logs: (1.226792427s)
--- PASS: TestFunctional/serial/LogsCmd (1.23s)

TestFunctional/serial/LogsFileCmd (1.24s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 logs --file /tmp/TestFunctionalserialLogsFileCmd3268595089/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-790753 logs --file /tmp/TestFunctionalserialLogsFileCmd3268595089/001/logs.txt: (1.233851038s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.24s)

TestFunctional/serial/InvalidService (4.79s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-790753 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-790753
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-790753: exit status 115 (559.782624ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30310 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-790753 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.79s)

TestFunctional/parallel/ConfigCmd (0.57s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-790753 config get cpus: exit status 14 (119.948162ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-790753 config get cpus: exit status 14 (97.420436ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.57s)

TestFunctional/parallel/DashboardCmd (13.95s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-790753 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-790753 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 48225: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.95s)

TestFunctional/parallel/DryRun (0.72s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-790753 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-790753 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (293.32126ms)

-- stdout --
	* [functional-790753] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17822
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17822-2192/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-2192/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr **
	I1218 22:46:17.062846   47572 out.go:296] Setting OutFile to fd 1 ...
	I1218 22:46:17.063168   47572 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 22:46:17.063198   47572 out.go:309] Setting ErrFile to fd 2...
	I1218 22:46:17.063219   47572 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 22:46:17.063507   47572 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-2192/.minikube/bin
	I1218 22:46:17.063946   47572 out.go:303] Setting JSON to false
	I1218 22:46:17.065128   47572 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1726,"bootTime":1702937851,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1218 22:46:17.065223   47572 start.go:138] virtualization:  
	I1218 22:46:17.075592   47572 out.go:177] * [functional-790753] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1218 22:46:17.078008   47572 out.go:177]   - MINIKUBE_LOCATION=17822
	I1218 22:46:17.078091   47572 notify.go:220] Checking for updates...
	I1218 22:46:17.081222   47572 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 22:46:17.083647   47572 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17822-2192/kubeconfig
	I1218 22:46:17.086055   47572 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-2192/.minikube
	I1218 22:46:17.088140   47572 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 22:46:17.091243   47572 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 22:46:17.094304   47572 config.go:182] Loaded profile config "functional-790753": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 22:46:17.094802   47572 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 22:46:17.129785   47572 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1218 22:46:17.129918   47572 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 22:46:17.258662   47572 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-12-18 22:46:17.244111982 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 22:46:17.258760   47572 docker.go:295] overlay module found
	I1218 22:46:17.262041   47572 out.go:177] * Using the docker driver based on existing profile
	I1218 22:46:17.266072   47572 start.go:298] selected driver: docker
	I1218 22:46:17.266090   47572 start.go:902] validating driver "docker" against &{Name:functional-790753 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-790753 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 22:46:17.266197   47572 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 22:46:17.270166   47572 out.go:177] 
	W1218 22:46:17.272634   47572 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1218 22:46:17.274612   47572 out.go:177]

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-790753 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.72s)
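--dry-run validates flags against the existing profile without touching the cluster; asking for 250MB trips the 1800MB usable minimum and, per the log above, exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A Go sketch of that check, assuming the same binary path and profile:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-790753",
		"--dry-run", "--memory", "250MB", "--alsologtostderr",
		"--driver=docker", "--container-runtime=docker")
	if err := cmd.Run(); err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			// Expect 23: the 250MiB request is below the 1800MB usable minimum.
			fmt.Println("dry-run rejected with exit", ee.ExitCode())
		}
	}
}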

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.34s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-790753 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-790753 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (338.354456ms)

-- stdout --
	* [functional-790753] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17822
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17822-2192/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-2192/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr **
	I1218 22:46:17.846081   47735 out.go:296] Setting OutFile to fd 1 ...
	I1218 22:46:17.846418   47735 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 22:46:17.846474   47735 out.go:309] Setting ErrFile to fd 2...
	I1218 22:46:17.846514   47735 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 22:46:17.847485   47735 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-2192/.minikube/bin
	I1218 22:46:17.847900   47735 out.go:303] Setting JSON to false
	I1218 22:46:17.849042   47735 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1727,"bootTime":1702937851,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1218 22:46:17.849143   47735 start.go:138] virtualization:  
	I1218 22:46:17.851796   47735 out.go:177] * [functional-790753] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I1218 22:46:17.855415   47735 notify.go:220] Checking for updates...
	I1218 22:46:17.856673   47735 out.go:177]   - MINIKUBE_LOCATION=17822
	I1218 22:46:17.862844   47735 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 22:46:17.864355   47735 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17822-2192/kubeconfig
	I1218 22:46:17.866003   47735 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-2192/.minikube
	I1218 22:46:17.867884   47735 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1218 22:46:17.869633   47735 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 22:46:17.872034   47735 config.go:182] Loaded profile config "functional-790753": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 22:46:17.872671   47735 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 22:46:17.901501   47735 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1218 22:46:17.901621   47735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 22:46:18.015511   47735 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-12-18 22:46:18.006216612 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 22:46:18.015613   47735 docker.go:295] overlay module found
	I1218 22:46:18.017643   47735 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1218 22:46:18.020929   47735 start.go:298] selected driver: docker
	I1218 22:46:18.020959   47735 start.go:902] validating driver "docker" against &{Name:functional-790753 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-790753 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 22:46:18.021068   47735 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 22:46:18.032501   47735 out.go:177] 
	W1218 22:46:18.037010   47735 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1218 22:46:18.039333   47735 out.go:177]

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.34s)
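The French output above ("Utilisation du pilote docker basé sur le profil existant", and the RSRC_INSUFFICIENT_REQ_MEMORY message) comes from minikube's localized message catalog. A sketch of how one might trigger it, under the assumption (not shown in this log) that a French locale in the child environment is what selects the translation:

package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-790753",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=docker")
	// Assumption: a French locale here is what switches minikube to the
	// translated strings seen in the stdout above.
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	_ = cmd.Run() // still exits 23, same as the English dry run
}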

                                                
                                    
TestFunctional/parallel/StatusCmd (1.32s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.32s)
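Both the Go-template and JSON forms of status shown above expose the same fields. A minimal sketch that parses "status -o json", assuming a single-node profile emits one JSON object whose field names match the {{.Host}}/{{.Kubelet}}/{{.APIServer}}/{{.Kubeconfig}} template keys used in the test:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// status fields matching the template keys exercised above
type status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-790753",
		"status", "-o", "json").Output()
	if err != nil {
		// status encodes cluster state in its exit code, so a non-zero exit
		// can mean "stopped" rather than "command broken"
		fmt.Println("non-zero status exit:", err)
	}
	var st status
	if json.Unmarshal(out, &st) == nil {
		fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
			st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
	}
}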

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.77s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-790753 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-790753 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-s96nj" [30343e0f-966e-42d1-a45f-aca222a5cfda] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-s96nj" [30343e0f-966e-42d1-a45f-aca222a5cfda] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.005691208s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31992
functional_test.go:1674: http://192.168.49.2:31992: success! body:

Hostname: hello-node-connect-7799dfb7c6-s96nj

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31992
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.77s)
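The test boils down to: resolve the NodePort URL with "minikube service --url", then issue a plain HTTP GET against it. A self-contained Go sketch of that probe, assuming the deployment and service created above already exist:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the NodePort URL of the exposed deployment, then probe it,
	// mirroring the service/HTTP check above.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-790753",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %d\n%s", url, resp.StatusCode, body)
}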

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (26.84s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [35d6687c-f25f-409a-aef3-c2efdd66efef] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004015193s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-790753 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-790753 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-790753 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-790753 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [333b7f73-2338-4f96-a007-da43aa2474fd] Pending
helpers_test.go:344: "sp-pod" [333b7f73-2338-4f96-a007-da43aa2474fd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [333b7f73-2338-4f96-a007-da43aa2474fd] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.005425906s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-790753 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-790753 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-790753 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [92348c18-6148-4e26-bf69-39d0ddd7827d] Pending
helpers_test.go:344: "sp-pod" [92348c18-6148-4e26-bf69-39d0ddd7827d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [92348c18-6148-4e26-bf69-39d0ddd7827d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003668249s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-790753 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.84s)
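The persistence assertion above is: write through the PVC mount, delete and recreate the pod, then verify the file survives the pod's lifetime. A compressed Go sketch of that sequence (readiness polling elided; the real test waits up to 3m for each pod):

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to kubectl against this profile's context.
func run(args ...string) ([]byte, error) {
	full := append([]string{"--context", "functional-790753"}, args...)
	return exec.Command("kubectl", full...).CombinedOutput()
}

func main() {
	// Write through the PVC mount, recycle the pod, and confirm the file survives.
	_, _ = run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	_, _ = run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	_, _ = run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	out, err := run("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Printf("err=%v files=%s", err, out) // expect "foo" to still be listed
}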

                                                
                                    
TestFunctional/parallel/SSHCmd (0.78s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.78s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.31s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh -n functional-790753 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 cp functional-790753:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd614297702/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh -n functional-790753 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh -n functional-790753 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.31s)
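Each cp assertion pairs a "minikube cp" with an over-ssh "cat" of the destination. A minimal Go sketch of one such pair, assuming the same binary path and profile as this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	bin, p := "out/minikube-linux-arm64", "functional-790753"
	// Copy a file into the node, then read it back over ssh, as the test does.
	if err := exec.Command(bin, "-p", p, "cp",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}
	out, err := exec.Command(bin, "-p", p, "ssh", "-n", p,
		"sudo cat /home/docker/cp-test.txt").CombinedOutput()
	fmt.Printf("err=%v content=%s", err, out)
}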

                                                
                                    
TestFunctional/parallel/FileSync (0.37s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/7489/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh "sudo cat /etc/test/nested/copy/7489/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)
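FileSync relies on minikube mirroring everything under $MINIKUBE_HOME/files into the node at the same relative path. A sketch of how such a fixture is staged on the host (the 7489 path component is this run's test PID, and the content string matches the log above):

package main

import (
	"os"
	"path/filepath"
)

func main() {
	// Files staged under $MINIKUBE_HOME/files/<path> surface inside the node
	// at /<path> on start; this one would appear as /etc/test/nested/copy/7489/hosts.
	home := os.Getenv("MINIKUBE_HOME") // e.g. .../.minikube in this run
	dst := filepath.Join(home, "files", "etc", "test", "nested", "copy", "7489")
	if err := os.MkdirAll(dst, 0o755); err != nil {
		panic(err)
	}
	content := []byte("Test file for checking file sync process")
	if err := os.WriteFile(filepath.Join(dst, "hosts"), content, 0o644); err != nil {
		panic(err)
	}
}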

                                                
                                    
TestFunctional/parallel/CertSync (2.32s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/7489.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh "sudo cat /etc/ssl/certs/7489.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/7489.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh "sudo cat /usr/share/ca-certificates/7489.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/74892.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh "sudo cat /etc/ssl/certs/74892.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/74892.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh "sudo cat /usr/share/ca-certificates/74892.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.32s)
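CertSync works the same way for $MINIKUBE_HOME/certs: PEMs staged there are installed into the node's trust store, which is why the test can also find them under /usr/share/ca-certificates and by subject-hash alias (the 51391683.0 and 3ec20f2e.0 names checked above). A sketch of staging one certificate, with an illustrative source path:

package main

import (
	"io"
	"os"
	"path/filepath"
)

func main() {
	// Copy a test certificate into $MINIKUBE_HOME/certs; minikube syncs it into
	// the node's cert directories on start. "7489.pem" is illustrative here.
	src, err := os.Open("7489.pem")
	if err != nil {
		panic(err)
	}
	defer src.Close()
	dstPath := filepath.Join(os.Getenv("MINIKUBE_HOME"), "certs", "7489.pem")
	dst, err := os.Create(dstPath)
	if err != nil {
		panic(err)
	}
	defer dst.Close()
	if _, err := io.Copy(dst, src); err != nil {
		panic(err)
	}
}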

                                                
                                    
TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-790753 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh "sudo systemctl is-active crio"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-790753 ssh "sudo systemctl is-active crio": exit status 1 (512.774586ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)
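"systemctl is-active" exits 0 only when the unit is active, so for the disabled crio runtime it prints "inactive" and exits non-zero; minikube ssh then surfaces the failure (exit 1 from minikube, remote status 3, as in the stderr above). A one-call Go sketch of the same probe:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// A non-zero exit with output "inactive" is the expected (passing) outcome.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-790753",
		"ssh", "sudo systemctl is-active crio").CombinedOutput()
	fmt.Printf("err=%v output=%s", err, out)
}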

                                                
                                    
TestFunctional/parallel/License (0.34s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (0.91s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.91s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-790753 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-790753
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-790753
docker.io/kubernetesui/metrics-scraper:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-790753 image ls --format short --alsologtostderr:
I1218 22:46:25.019154   49190 out.go:296] Setting OutFile to fd 1 ...
I1218 22:46:25.019407   49190 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 22:46:25.019436   49190 out.go:309] Setting ErrFile to fd 2...
I1218 22:46:25.019478   49190 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 22:46:25.019839   49190 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-2192/.minikube/bin
I1218 22:46:25.020600   49190 config.go:182] Loaded profile config "functional-790753": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1218 22:46:25.020857   49190 config.go:182] Loaded profile config "functional-790753": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1218 22:46:25.021532   49190 cli_runner.go:164] Run: docker container inspect functional-790753 --format={{.State.Status}}
I1218 22:46:25.042283   49190 ssh_runner.go:195] Run: systemctl --version
I1218 22:46:25.042337   49190 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-790753
I1218 22:46:25.061700   49190 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/functional-790753/id_rsa Username:docker}
I1218 22:46:25.166618   49190 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-790753 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-790753 | 7ecd7c7d8afbe | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | 9961cbceaf234 | 116MB  |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| gcr.io/google-containers/addon-resizer      | functional-790753 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 3ca3ca488cf13 | 68.4MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/localhost/my-image                | functional-790753 | 5143c5ccb0345 | 1.41MB |
| docker.io/library/nginx                     | alpine            | f09fc93534f6a | 43.4MB |
| registry.k8s.io/kube-scheduler              | v1.28.4           | 05c284c929889 | 57.8MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 9cdd6470f48c8 | 181MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| docker.io/library/nginx                     | latest            | 5628e5ea3c17f | 192MB  |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 04b4c447bb9d4 | 120MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-790753 image ls --format table --alsologtostderr:
I1218 22:46:28.807424   49557 out.go:296] Setting OutFile to fd 1 ...
I1218 22:46:28.807609   49557 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 22:46:28.807621   49557 out.go:309] Setting ErrFile to fd 2...
I1218 22:46:28.807627   49557 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 22:46:28.807875   49557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-2192/.minikube/bin
I1218 22:46:28.808569   49557 config.go:182] Loaded profile config "functional-790753": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1218 22:46:28.808708   49557 config.go:182] Loaded profile config "functional-790753": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1218 22:46:28.809191   49557 cli_runner.go:164] Run: docker container inspect functional-790753 --format={{.State.Status}}
I1218 22:46:28.827017   49557 ssh_runner.go:195] Run: systemctl --version
I1218 22:46:28.827071   49557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-790753
I1218 22:46:28.843668   49557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/functional-790753/id_rsa Username:docker}
I1218 22:46:28.942015   49557 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2023/12/18 22:46:31 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-790753 image ls --format json --alsologtostderr:
[{"id":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"116000000"},{"id":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"68400000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"5143c5ccb03451b1e26300cfa8c33e6ceba1c69593f3bfe40d3f170d9ac0ee23","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-790753"],"size":"1410000"},{"id":"f09fc93534f6a80e
1cb9ad70fe8c697b1596faa9f1b50895f203bc02feb9ebb8","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43400000"},{"id":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"120000000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"7ecd7c7d8afbe692dfdb34592d8c5990dd23ad63f3afed486d30bf4bed6496d8","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-790753"],"size":"30"},{"id":"5628e5ea3c17fa1cbf49692edf41d5a1cdf198922898e6ffb29c19768dca8fd3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoT
ags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-790753"],"size":"32900000"},{"id":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"57800000"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"181000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metric
s-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-790753 image ls --format json --alsologtostderr:
I1218 22:46:28.577461   49531 out.go:296] Setting OutFile to fd 1 ...
I1218 22:46:28.577616   49531 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 22:46:28.577625   49531 out.go:309] Setting ErrFile to fd 2...
I1218 22:46:28.577632   49531 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 22:46:28.577883   49531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-2192/.minikube/bin
I1218 22:46:28.578506   49531 config.go:182] Loaded profile config "functional-790753": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1218 22:46:28.578639   49531 config.go:182] Loaded profile config "functional-790753": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1218 22:46:28.579263   49531 cli_runner.go:164] Run: docker container inspect functional-790753 --format={{.State.Status}}
I1218 22:46:28.597413   49531 ssh_runner.go:195] Run: systemctl --version
I1218 22:46:28.597463   49531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-790753
I1218 22:46:28.614827   49531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/functional-790753/id_rsa Username:docker}
I1218 22:46:28.714001   49531 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
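The JSON format is the easiest of the four list formats to consume programmatically. A minimal Go sketch that decodes "image ls --format json" into the id/repoTags/size fields visible in the output above:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image matches the entries emitted by "image ls --format json" above.
type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-790753",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var imgs []image
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs {
		fmt.Println(img.RepoTags, img.Size)
	}
}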

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-790753 image ls --format yaml --alsologtostderr:
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "181000000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-790753
size: "32900000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 7ecd7c7d8afbe692dfdb34592d8c5990dd23ad63f3afed486d30bf4bed6496d8
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-790753
size: "30"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "68400000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "120000000"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "116000000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 5628e5ea3c17fa1cbf49692edf41d5a1cdf198922898e6ffb29c19768dca8fd3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "57800000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: f09fc93534f6a80e1cb9ad70fe8c697b1596faa9f1b50895f203bc02feb9ebb8
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43400000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-790753 image ls --format yaml --alsologtostderr:
I1218 22:46:25.331022   49221 out.go:296] Setting OutFile to fd 1 ...
I1218 22:46:25.331185   49221 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 22:46:25.331235   49221 out.go:309] Setting ErrFile to fd 2...
I1218 22:46:25.331253   49221 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 22:46:25.331519   49221 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-2192/.minikube/bin
I1218 22:46:25.332196   49221 config.go:182] Loaded profile config "functional-790753": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1218 22:46:25.332361   49221 config.go:182] Loaded profile config "functional-790753": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1218 22:46:25.332943   49221 cli_runner.go:164] Run: docker container inspect functional-790753 --format={{.State.Status}}
I1218 22:46:25.351428   49221 ssh_runner.go:195] Run: systemctl --version
I1218 22:46:25.351478   49221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-790753
I1218 22:46:25.381936   49221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/functional-790753/id_rsa Username:docker}
I1218 22:46:25.482210   49221 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-790753 ssh pgrep buildkitd: exit status 1 (391.127639ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 image build -t localhost/my-image:functional-790753 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-790753 image build -t localhost/my-image:functional-790753 testdata/build --alsologtostderr: (2.336862814s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-790753 image build -t localhost/my-image:functional-790753 testdata/build --alsologtostderr:
I1218 22:46:25.992683   49297 out.go:296] Setting OutFile to fd 1 ...
I1218 22:46:25.992930   49297 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 22:46:25.992968   49297 out.go:309] Setting ErrFile to fd 2...
I1218 22:46:25.992989   49297 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 22:46:25.993272   49297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-2192/.minikube/bin
I1218 22:46:25.993951   49297 config.go:182] Loaded profile config "functional-790753": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1218 22:46:25.995775   49297 config.go:182] Loaded profile config "functional-790753": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1218 22:46:25.996333   49297 cli_runner.go:164] Run: docker container inspect functional-790753 --format={{.State.Status}}
I1218 22:46:26.015579   49297 ssh_runner.go:195] Run: systemctl --version
I1218 22:46:26.015624   49297 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-790753
I1218 22:46:26.043327   49297 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/functional-790753/id_rsa Username:docker}
I1218 22:46:26.150471   49297 build_images.go:151] Building image from path: /tmp/build.1979849324.tar
I1218 22:46:26.150538   49297 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1218 22:46:26.165125   49297 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1979849324.tar
I1218 22:46:26.170410   49297 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1979849324.tar: stat -c "%s %y" /var/lib/minikube/build/build.1979849324.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1979849324.tar': No such file or directory
I1218 22:46:26.170435   49297 ssh_runner.go:362] scp /tmp/build.1979849324.tar --> /var/lib/minikube/build/build.1979849324.tar (3072 bytes)
I1218 22:46:26.205189   49297 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1979849324
I1218 22:46:26.215220   49297 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1979849324 -xf /var/lib/minikube/build/build.1979849324.tar
I1218 22:46:26.225399   49297 docker.go:346] Building image: /var/lib/minikube/build/build.1979849324
I1218 22:46:26.225470   49297 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-790753 /var/lib/minikube/build/build.1979849324
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.0s

                                                
                                                
#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 0.7s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.3s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:5143c5ccb03451b1e26300cfa8c33e6ceba1c69593f3bfe40d3f170d9ac0ee23 done
#8 naming to localhost/my-image:functional-790753 0.0s done
#8 DONE 0.1s
I1218 22:46:28.212156   49297 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-790753 /var/lib/minikube/build/build.1979849324: (1.986663577s)
I1218 22:46:28.212219   49297 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1979849324
I1218 22:46:28.228289   49297 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1979849324.tar
I1218 22:46:28.238094   49297 build_images.go:207] Built localhost/my-image:functional-790753 from /tmp/build.1979849324.tar
I1218 22:46:28.238124   49297 build_images.go:123] succeeded building to: functional-790753
I1218 22:46:28.238129   49297 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.99s)
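BuildKit steps #5 through #7 above imply a three-instruction Dockerfile. A hypothetical reconstruction for reproducing the build by hand (file name and contents are inferred from the log, not read from the repo's testdata/build sources):

    # Recreate the inferred build context (contents are assumptions from steps #5-#7).
    mkdir -p /tmp/repro && cd /tmp/repro
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    echo test > content.txt
    # Build inside the minikube node's Docker daemon, as the test does.
    out/minikube-linux-arm64 -p functional-790753 image build -t localhost/my-image:functional-790753 .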

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.537904635s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-790753
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.58s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-790753 docker-env) && out/minikube-linux-arm64 status -p functional-790753"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-790753 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.36s)
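docker-env emits shell exports that point the host docker CLI at the node's Docker daemon, which is what the eval pattern above exercises. The same round trip by hand:

    # Target the minikube node's daemon, confirm its images are visible, then undo.
    eval "$(out/minikube-linux-arm64 -p functional-790753 docker-env)"
    docker images
    eval "$(out/minikube-linux-arm64 -p functional-790753 docker-env --unset)"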

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)
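update-context rewrites the profile's kubeconfig entry to the cluster's current endpoint; this and the two variants that follow differ only in the kubeconfig state they start from. A quick sanity check after running it:

    out/minikube-linux-arm64 -p functional-790753 update-context
    kubectl --context functional-790753 cluster-info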

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 image load --daemon gcr.io/google-containers/addon-resizer:functional-790753 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-790753 image load --daemon gcr.io/google-containers/addon-resizer:functional-790753 --alsologtostderr: (3.998897211s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.26s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (12.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-790753 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-790753 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-g4pj6" [c01d8ae1-ea79-4f6d-b544-0dec37381da9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-g4pj6" [c01d8ae1-ea79-4f6d-b544-0dec37381da9] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.003529404s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.28s)
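The create/expose pair above can also be gated on rollout status rather than polling pod labels; a sketch with plain kubectl (the 10m cap mirrors the test's wait):

    kubectl --context functional-790753 create deployment hello-node \
      --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-790753 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-790753 rollout status deployment/hello-node --timeout=10m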

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 image load --daemon gcr.io/google-containers/addon-resizer:functional-790753 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-790753 image load --daemon gcr.io/google-containers/addon-resizer:functional-790753 --alsologtostderr: (2.584584765s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.564283329s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-790753
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 image load --daemon gcr.io/google-containers/addon-resizer:functional-790753 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-790753 image load --daemon gcr.io/google-containers/addon-resizer:functional-790753 --alsologtostderr: (3.116554506s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.93s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 image save gcr.io/google-containers/addon-resizer:functional-790753 /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.90s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 image rm gcr.io/google-containers/addon-resizer:functional-790753 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-790753 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr: (1.097803498s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.34s)
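ImageSaveToFile, ImageRemove and ImageLoadFromFile together form a tar round-trip of the same image; condensed into one sequence (path shortened from the run above):

    IMG=gcr.io/google-containers/addon-resizer:functional-790753
    out/minikube-linux-arm64 -p functional-790753 image save "$IMG" /tmp/addon-resizer-save.tar
    out/minikube-linux-arm64 -p functional-790753 image rm "$IMG"
    out/minikube-linux-arm64 -p functional-790753 image load /tmp/addon-resizer-save.tar
    out/minikube-linux-arm64 -p functional-790753 image ls | grep addon-resizer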

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-790753
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 image save --daemon gcr.io/google-containers/addon-resizer:functional-790753 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-arm64 -p functional-790753 image save --daemon gcr.io/google-containers/addon-resizer:functional-790753 --alsologtostderr: (1.200319993s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-790753
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.24s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 service list -o json
functional_test.go:1493: Took "441.784308ms" to run "out/minikube-linux-arm64 -p functional-790753 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30624
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.62s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30624
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.62s)
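The HTTPS, Format and URL subtests all resolve the same NodePort endpoint; it can be smoke-tested directly (echoserver answers with the request details):

    URL=$(out/minikube-linux-arm64 -p functional-790753 service hello-node --url)
    curl -s "$URL" | head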

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-790753 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-790753 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-790753 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 45140: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-790753 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.74s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-790753 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-790753 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [8fb3ae54-c01d-4578-bbcc-0b27553573f6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [8fb3ae54-c01d-4578-bbcc-0b27553573f6] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004468041s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.44s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-790753 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
E1218 22:45:59.802055    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.182.244 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-790753 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
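The tunnel subtests reduce to: start a tunnel, wait for the service's LoadBalancer ingress IP (the jsonpath used in WaitService/IngressIP above), hit it, tear down. As one hand-run sequence:

    out/minikube-linux-arm64 -p functional-790753 tunnel &
    TUNNEL_PID=$!
    # Poll until the tunnel assigns an ingress IP to the LoadBalancer service.
    until IP=$(kubectl --context functional-790753 get svc nginx-svc \
        -o jsonpath='{.status.loadBalancer.ingress[0].ip}') && [ -n "$IP" ]; do
      sleep 1
    done
    curl -s "http://$IP" >/dev/null && echo "tunnel at http://$IP is working"
    kill "$TUNNEL_PID"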

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "362.685761ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "73.534483ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "345.555331ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "70.792043ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
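The JSON variants exist for machine consumption; piping through jq is the usual pattern (the .valid[].Name path is an assumption about the schema, which this run does not show):

    out/minikube-linux-arm64 profile list -o json | jq .
    out/minikube-linux-arm64 profile list -o json | jq -r '.valid[].Name'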

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-790753 /tmp/TestFunctionalparallelMountCmdany-port4190001970/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1702939570720515316" to /tmp/TestFunctionalparallelMountCmdany-port4190001970/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1702939570720515316" to /tmp/TestFunctionalparallelMountCmdany-port4190001970/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1702939570720515316" to /tmp/TestFunctionalparallelMountCmdany-port4190001970/001/test-1702939570720515316
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-790753 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (541.874952ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 18 22:46 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 18 22:46 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 18 22:46 test-1702939570720515316
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh cat /mount-9p/test-1702939570720515316
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-790753 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e9c00ee1-397a-4fd8-9748-ef5ca483947d] Pending
helpers_test.go:344: "busybox-mount" [e9c00ee1-397a-4fd8-9748-ef5ca483947d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e9c00ee1-397a-4fd8-9748-ef5ca483947d] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e9c00ee1-397a-4fd8-9748-ef5ca483947d] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004378107s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-790753 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-790753 /tmp/TestFunctionalparallelMountCmdany-port4190001970/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.85s)
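The 9p mount flow above, condensed; the poll loop mirrors the test's retry after the first findmnt exits non-zero while the mount daemon is still starting:

    SRC=$(mktemp -d)
    out/minikube-linux-arm64 mount -p functional-790753 "$SRC:/mount-9p" &
    MOUNT_PID=$!
    # The mount takes a moment to appear in the guest; poll rather than fail fast.
    until out/minikube-linux-arm64 -p functional-790753 ssh "findmnt -T /mount-9p | grep 9p"; do
      sleep 1
    done
    out/minikube-linux-arm64 -p functional-790753 ssh -- ls -la /mount-9p
    kill "$MOUNT_PID"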

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-790753 /tmp/TestFunctionalparallelMountCmdspecific-port3994013098/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-790753 /tmp/TestFunctionalparallelMountCmdspecific-port3994013098/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-790753 ssh "sudo umount -f /mount-9p": exit status 1 (454.237324ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-790753 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-790753 /tmp/TestFunctionalparallelMountCmdspecific-port3994013098/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.71s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-790753 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2476906995/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-790753 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2476906995/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-790753 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2476906995/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-790753 ssh "findmnt -T" /mount1: exit status 1 (1.276061226s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-790753 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-790753 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-790753 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2476906995/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-790753 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2476906995/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-790753 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2476906995/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.87s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-790753
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-790753
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-790753
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestImageBuild/serial/Setup (34.23s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-917364 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-917364 --driver=docker  --container-runtime=docker: (34.233499133s)
--- PASS: TestImageBuild/serial/Setup (34.23s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.67s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-917364
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-917364: (1.66625242s)
--- PASS: TestImageBuild/serial/NormalBuild (1.67s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.88s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-917364
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.88s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.75s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-917364
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.75s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.74s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-917364
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.74s)
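The serial build cases exercise three flag paths of minikube image build, all taken verbatim from the runs above:

    # Plain build from a context directory.
    out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-917364
    # Build args plus cache busting via repeated --build-opt.
    out/minikube-linux-arm64 image build -t aaa:latest \
      --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-917364
    # Dockerfile at a non-default path, relative to the context root.
    out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-917364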

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (104.49s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-319045 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1218 22:47:21.722775    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-319045 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (1m44.493834222s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (104.49s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.44s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-319045 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-319045 addons enable ingress --alsologtostderr -v=5: (10.441539793s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.44s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.64s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-319045 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.64s)

                                                
                                    
TestJSONOutput/start/Command (58.72s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-118233 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E1218 22:50:35.484632    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/functional-790753/client.crt: no such file or directory
E1218 22:50:35.489833    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/functional-790753/client.crt: no such file or directory
E1218 22:50:35.500011    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/functional-790753/client.crt: no such file or directory
E1218 22:50:35.520207    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/functional-790753/client.crt: no such file or directory
E1218 22:50:35.560425    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/functional-790753/client.crt: no such file or directory
E1218 22:50:35.640653    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/functional-790753/client.crt: no such file or directory
E1218 22:50:35.800986    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/functional-790753/client.crt: no such file or directory
E1218 22:50:36.121855    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/functional-790753/client.crt: no such file or directory
E1218 22:50:36.762721    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/functional-790753/client.crt: no such file or directory
E1218 22:50:38.043879    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/functional-790753/client.crt: no such file or directory
E1218 22:50:40.604905    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/functional-790753/client.crt: no such file or directory
E1218 22:50:45.726039    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/functional-790753/client.crt: no such file or directory
E1218 22:50:55.966773    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/functional-790753/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-118233 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (58.715137547s)
--- PASS: TestJSONOutput/start/Command (58.72s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-118233 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.55s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-118233 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.55s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (10.91s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-118233 --output=json --user=testUser
E1218 22:51:16.447429    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/functional-790753/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-118233 --output=json --user=testUser: (10.908405313s)
--- PASS: TestJSONOutput/stop/Command (10.91s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.26s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-052750 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-052750 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (104.422282ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"804ab90e-7a32-426e-90b8-f269a9f85aae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-052750] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"676bd34b-a503-47a0-976a-58a5ebe571d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17822"}}
	{"specversion":"1.0","id":"75aa336b-4c98-4c35-a15a-d127b2229c26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"eb852554-0c0d-4298-b5bd-cddbff538727","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17822-2192/kubeconfig"}}
	{"specversion":"1.0","id":"3035ba2b-75ea-414c-be92-5fc85a3127b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-2192/.minikube"}}
	{"specversion":"1.0","id":"f9274b0e-8c23-4dea-9ebc-bbf9727efa47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"214d35b6-0cff-4b64-926f-843b01321862","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"92167a49-adb7-4778-afab-34157b4f7d53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-052750" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-052750
--- PASS: TestErrorJSONOutput (0.26s)
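Each --output=json line is a CloudEvents-style object (see the stdout above), so callers can key off the type field; e.g. surfacing only the error event's code and message:

    out/minikube-linux-arm64 start -p json-output-error-052750 --memory=2200 \
        --output=json --wait=true --driver=fail 2>/dev/null \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.exitcode): \(.data.message)"'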

                                                
                                    
TestKicCustomNetwork/create_custom_network (36.19s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-597351 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-597351 --network=: (34.003160281s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-597351" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-597351
E1218 22:51:57.408520    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/functional-790753/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-597351: (2.157225903s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.19s)
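With an empty --network= argument the docker driver appears to fall back to its default behavior of creating a dedicated docker network named after the profile, which is what the docker network ls check above inspects. A rough manual reproduction (profile name illustrative):

	minikube start -p docker-network-demo --network=
	docker network ls --format '{{.Name}}' | grep docker-network-demo   # profile-named network should exist
	minikube delete -p docker-network-demo                              # teardown should remove it again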

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (38.32s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-291059 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-291059 --network=bridge: (36.282684518s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-291059" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-291059
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-291059: (2.02035504s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (38.32s)

                                                
                                    
TestKicExistingNetwork (34.29s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-887098 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-887098 --network=existing-network: (32.099864535s)
helpers_test.go:175: Cleaning up "existing-network-887098" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-887098
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-887098: (2.030754037s)
--- PASS: TestKicExistingNetwork (34.29s)
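Here the docker network exists before minikube starts, so minikube attaches the node container to it rather than creating its own. The same flow, sketched by hand (names illustrative):

	docker network create existing-network          # pre-create the network out of band
	minikube start -p existing-demo --network=existing-network
	docker network inspect existing-network         # the node container should appear under Containers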

                                                
                                    
TestKicCustomSubnet (34.41s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-887957 --subnet=192.168.60.0/24
E1218 22:53:19.328656    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/functional-790753/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-887957 --subnet=192.168.60.0/24: (32.22051208s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-887957 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-887957" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-887957
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-887957: (2.170221077s)
--- PASS: TestKicCustomSubnet (34.41s)
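The assertion at kic_custom_network_test.go:161 reads the subnet back out of the network's IPAM configuration. Reproducing that check manually (profile name illustrative):

	minikube start -p subnet-demo --subnet=192.168.60.0/24
	docker network inspect subnet-demo --format '{{(index .IPAM.Config 0).Subnet}}'
	# expected output: 192.168.60.0/24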

                                                
                                    
TestKicStaticIP (33.88s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-880137 --static-ip=192.168.200.200
E1218 22:54:10.772313    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: no such file or directory
E1218 22:54:10.777587    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: no such file or directory
E1218 22:54:10.788585    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: no such file or directory
E1218 22:54:10.808790    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: no such file or directory
E1218 22:54:10.848981    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: no such file or directory
E1218 22:54:10.929124    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: no such file or directory
E1218 22:54:11.089601    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: no such file or directory
E1218 22:54:11.410949    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: no such file or directory
E1218 22:54:12.051786    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: no such file or directory
E1218 22:54:13.331992    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: no such file or directory
E1218 22:54:15.892806    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-880137 --static-ip=192.168.200.200: (31.629780949s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-880137 ip
helpers_test.go:175: Cleaning up "static-ip-880137" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-880137
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-880137: (2.085729093s)
--- PASS: TestKicStaticIP (33.88s)
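The `minikube ip` call is the actual assertion here: the address it reports must match the one requested via --static-ip. By hand (profile name illustrative):

	minikube start -p static-demo --static-ip=192.168.200.200
	minikube -p static-demo ip   # expected output: 192.168.200.200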

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (70.04s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-742167 --driver=docker  --container-runtime=docker
E1218 22:54:21.014364    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: no such file or directory
E1218 22:54:31.254783    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: no such file or directory
E1218 22:54:37.876467    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-742167 --driver=docker  --container-runtime=docker: (31.511218242s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-744932 --driver=docker  --container-runtime=docker
E1218 22:54:51.734893    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-744932 --driver=docker  --container-runtime=docker: (33.035920996s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-742167
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-744932
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-744932" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-744932
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-744932: (2.130467807s)
helpers_test.go:175: Cleaning up "first-742167" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-742167
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-742167: (2.070879474s)
--- PASS: TestMinikubeProfile (70.04s)
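The test flips the active profile back and forth and reads the result from `profile list`. A sketch, assuming (as in recent minikube releases) that the JSON output is an object with `valid` and `invalid` profile arrays and that jq is available; profile names illustrative:

	minikube start -p first-demo --driver=docker --container-runtime=docker
	minikube start -p second-demo --driver=docker --container-runtime=docker
	minikube profile first-demo                             # make first-demo the active profile
	minikube profile list -o json | jq -r '.valid[].Name'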

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.98s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-502892 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E1218 22:55:32.695085    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: no such file or directory
E1218 22:55:35.484377    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/functional-790753/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-502892 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.976094359s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.98s)
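--mount exposes a host directory inside the node; in this suite the mount lands at /minikube-host, which is what the Verify* steps below list over SSH. A condensed reproduction (profile name illustrative; port/uid/gid values as in the run above):

	minikube start -p mount-demo --memory=2048 --no-kubernetes \
	  --mount --mount-uid 0 --mount-gid 0 --mount-port 46464 \
	  --driver=docker --container-runtime=docker
	minikube -p mount-demo ssh -- ls /minikube-host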

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-502892 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (10.11s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-504689 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-504689 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.110109751s)
--- PASS: TestMountStart/serial/StartWithMountSecond (10.11s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-504689 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.51s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-502892 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-502892 --alsologtostderr -v=5: (1.510482012s)
--- PASS: TestMountStart/serial/DeleteFirst (1.51s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-504689 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                    
TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-504689
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-504689: (1.211000554s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.8s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-504689
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-504689: (7.799490571s)
--- PASS: TestMountStart/serial/RestartStopped (8.80s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-504689 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (81.05s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-058139 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E1218 22:56:03.169760    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/functional-790753/client.crt: no such file or directory
E1218 22:56:54.616123    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-arm64 start -p multinode-058139 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m20.446825523s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (81.05s)
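--nodes=2 brings up a control plane plus one worker in a single start; the worker is named <profile>-m02, as the status output throughout this group shows. Sketch (profile name illustrative):

	minikube start -p multinode-demo --nodes=2 --memory=2200 --wait=true \
	  --driver=docker --container-runtime=docker
	minikube -p multinode-demo status   # should list multinode-demo and multinode-demo-m02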

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (49.43s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-058139 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-058139 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-058139 -- rollout status deployment/busybox: (2.714353518s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-058139 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-058139 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-058139 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-058139 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-058139 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-058139 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-058139 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-058139 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-058139 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-058139 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-058139 -- exec busybox-5bc68d56bd-bbtvt -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-058139 -- exec busybox-5bc68d56bd-wggpn -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-058139 -- exec busybox-5bc68d56bd-bbtvt -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-058139 -- exec busybox-5bc68d56bd-wggpn -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-058139 -- exec busybox-5bc68d56bd-bbtvt -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-058139 -- exec busybox-5bc68d56bd-wggpn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (49.43s)
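The repeated jsonpath queries above are a poll: immediately after the rollout the second busybox replica may not have been assigned a pod IP yet, so the harness retries until two IPs show up. The same wait, sketched as a shell loop (context name illustrative):

	until [ "$(kubectl --context multinode-demo get pods -o jsonpath='{.items[*].status.podIP}' | wc -w)" -ge 2 ]; do
	  sleep 5   # second replica's IP lags while it is scheduled onto the new node
	done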

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-058139 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-058139 -- exec busybox-5bc68d56bd-bbtvt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-058139 -- exec busybox-5bc68d56bd-bbtvt -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-058139 -- exec busybox-5bc68d56bd-wggpn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-058139 -- exec busybox-5bc68d56bd-wggpn -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.09s)
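host.minikube.internal resolves to the host-side gateway from inside pods. The awk/cut pipeline above scrapes the resolved address out of busybox's nslookup output (line 5, third space-delimited field, so it is sensitive to nslookup's exact layout), and the test then pings it. By hand, selecting a pod via the deployment (names illustrative):

	HOST_IP=$(kubectl --context multinode-demo exec deploy/busybox -- sh -c \
	  "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	kubectl --context multinode-demo exec deploy/busybox -- ping -c 1 "$HOST_IP"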

                                                
                                    
TestMultiNode/serial/AddNode (16.96s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-058139 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-058139 -v 3 --alsologtostderr: (16.091139634s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.96s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-058139 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.37s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.37s)

                                                
                                    
TestMultiNode/serial/CopyFile (11.36s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 cp testdata/cp-test.txt multinode-058139:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 ssh -n multinode-058139 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 cp multinode-058139:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2660739288/001/cp-test_multinode-058139.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 ssh -n multinode-058139 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 cp multinode-058139:/home/docker/cp-test.txt multinode-058139-m02:/home/docker/cp-test_multinode-058139_multinode-058139-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 ssh -n multinode-058139 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 ssh -n multinode-058139-m02 "sudo cat /home/docker/cp-test_multinode-058139_multinode-058139-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 cp multinode-058139:/home/docker/cp-test.txt multinode-058139-m03:/home/docker/cp-test_multinode-058139_multinode-058139-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 ssh -n multinode-058139 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 ssh -n multinode-058139-m03 "sudo cat /home/docker/cp-test_multinode-058139_multinode-058139-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 cp testdata/cp-test.txt multinode-058139-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 ssh -n multinode-058139-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 cp multinode-058139-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2660739288/001/cp-test_multinode-058139-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 ssh -n multinode-058139-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 cp multinode-058139-m02:/home/docker/cp-test.txt multinode-058139:/home/docker/cp-test_multinode-058139-m02_multinode-058139.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 ssh -n multinode-058139-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 ssh -n multinode-058139 "sudo cat /home/docker/cp-test_multinode-058139-m02_multinode-058139.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 cp multinode-058139-m02:/home/docker/cp-test.txt multinode-058139-m03:/home/docker/cp-test_multinode-058139-m02_multinode-058139-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 ssh -n multinode-058139-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 ssh -n multinode-058139-m03 "sudo cat /home/docker/cp-test_multinode-058139-m02_multinode-058139-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 cp testdata/cp-test.txt multinode-058139-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 ssh -n multinode-058139-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 cp multinode-058139-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2660739288/001/cp-test_multinode-058139-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 ssh -n multinode-058139-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 cp multinode-058139-m03:/home/docker/cp-test.txt multinode-058139:/home/docker/cp-test_multinode-058139-m03_multinode-058139.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 ssh -n multinode-058139-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 ssh -n multinode-058139 "sudo cat /home/docker/cp-test_multinode-058139-m03_multinode-058139.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 cp multinode-058139-m03:/home/docker/cp-test.txt multinode-058139-m02:/home/docker/cp-test_multinode-058139-m03_multinode-058139-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 ssh -n multinode-058139-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 ssh -n multinode-058139-m02 "sudo cat /home/docker/cp-test_multinode-058139-m03_multinode-058139-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.36s)
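minikube cp accepts <node>:<path> on either side, so the matrix above copies a file into each node and then between every pair of nodes, verifying contents over ssh -n <node> each time. The two primitive operations, sketched (names illustrative):

	minikube -p multinode-demo cp testdata/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt
	minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"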

                                                
                                    
TestMultiNode/serial/StopNode (2.42s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-arm64 -p multinode-058139 node stop m03: (1.234818811s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-058139 status: exit status 7 (557.381572ms)

                                                
                                                
-- stdout --
	multinode-058139
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-058139-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-058139-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-058139 status --alsologtostderr: exit status 7 (628.504748ms)

                                                
                                                
-- stdout --
	multinode-058139
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-058139-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-058139-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1218 22:58:44.646072  113750 out.go:296] Setting OutFile to fd 1 ...
	I1218 22:58:44.646249  113750 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 22:58:44.646260  113750 out.go:309] Setting ErrFile to fd 2...
	I1218 22:58:44.646267  113750 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 22:58:44.646541  113750 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-2192/.minikube/bin
	I1218 22:58:44.646749  113750 out.go:303] Setting JSON to false
	I1218 22:58:44.646838  113750 mustload.go:65] Loading cluster: multinode-058139
	I1218 22:58:44.646933  113750 notify.go:220] Checking for updates...
	I1218 22:58:44.647303  113750 config.go:182] Loaded profile config "multinode-058139": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 22:58:44.647321  113750 status.go:255] checking status of multinode-058139 ...
	I1218 22:58:44.648210  113750 cli_runner.go:164] Run: docker container inspect multinode-058139 --format={{.State.Status}}
	I1218 22:58:44.670118  113750 status.go:330] multinode-058139 host status = "Running" (err=<nil>)
	I1218 22:58:44.670139  113750 host.go:66] Checking if "multinode-058139" exists ...
	I1218 22:58:44.670425  113750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-058139
	I1218 22:58:44.690034  113750 host.go:66] Checking if "multinode-058139" exists ...
	I1218 22:58:44.690353  113750 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 22:58:44.690404  113750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-058139
	I1218 22:58:44.720289  113750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/multinode-058139/id_rsa Username:docker}
	I1218 22:58:44.822917  113750 ssh_runner.go:195] Run: systemctl --version
	I1218 22:58:44.829184  113750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 22:58:44.842862  113750 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1218 22:58:44.913726  113750 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-12-18 22:58:44.904671626 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1218 22:58:44.914302  113750 kubeconfig.go:92] found "multinode-058139" server: "https://192.168.58.2:8443"
	I1218 22:58:44.914325  113750 api_server.go:166] Checking apiserver status ...
	I1218 22:58:44.914369  113750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 22:58:44.927629  113750 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2189/cgroup
	I1218 22:58:44.937988  113750 api_server.go:182] apiserver freezer: "8:freezer:/docker/b3dfeb4bdc5db3a4978df4cbbe73df78093ebc3532323d14fc72de8c4dc2b834/kubepods/burstable/pod79ca6c35360ae65caccac3e143db7521/48002bacc9d988cd01da786fcab7580a3abdc7497d5cd1c8e5981b64acdb0cc1"
	I1218 22:58:44.938053  113750 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b3dfeb4bdc5db3a4978df4cbbe73df78093ebc3532323d14fc72de8c4dc2b834/kubepods/burstable/pod79ca6c35360ae65caccac3e143db7521/48002bacc9d988cd01da786fcab7580a3abdc7497d5cd1c8e5981b64acdb0cc1/freezer.state
	I1218 22:58:44.948287  113750 api_server.go:204] freezer state: "THAWED"
	I1218 22:58:44.948358  113750 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1218 22:58:44.963589  113750 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1218 22:58:44.963666  113750 status.go:421] multinode-058139 apiserver status = Running (err=<nil>)
	I1218 22:58:44.963718  113750 status.go:257] multinode-058139 status: &{Name:multinode-058139 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1218 22:58:44.963763  113750 status.go:255] checking status of multinode-058139-m02 ...
	I1218 22:58:44.964120  113750 cli_runner.go:164] Run: docker container inspect multinode-058139-m02 --format={{.State.Status}}
	I1218 22:58:44.989491  113750 status.go:330] multinode-058139-m02 host status = "Running" (err=<nil>)
	I1218 22:58:44.989511  113750 host.go:66] Checking if "multinode-058139-m02" exists ...
	I1218 22:58:44.989909  113750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-058139-m02
	I1218 22:58:45.010062  113750 host.go:66] Checking if "multinode-058139-m02" exists ...
	I1218 22:58:45.010397  113750 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 22:58:45.010439  113750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-058139-m02
	I1218 22:58:45.045211  113750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/17822-2192/.minikube/machines/multinode-058139-m02/id_rsa Username:docker}
	I1218 22:58:45.151402  113750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 22:58:45.166814  113750 status.go:257] multinode-058139-m02 status: &{Name:multinode-058139-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1218 22:58:45.166846  113750 status.go:255] checking status of multinode-058139-m03 ...
	I1218 22:58:45.167178  113750 cli_runner.go:164] Run: docker container inspect multinode-058139-m03 --format={{.State.Status}}
	I1218 22:58:45.188175  113750 status.go:330] multinode-058139-m03 host status = "Stopped" (err=<nil>)
	I1218 22:58:45.188202  113750 status.go:343] host is not running, skipping remaining checks
	I1218 22:58:45.188210  113750 status.go:257] multinode-058139-m03 status: &{Name:multinode-058139-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.42s)
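minikube status deliberately exits non-zero (7 here) when any host is stopped, which is why the harness records a "Non-zero exit" even though the test passes. The convention, sketched (profile name illustrative):

	minikube -p multinode-demo node stop m03
	minikube -p multinode-demo status
	echo $?   # 7: the command itself worked, but at least one node is down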

                                                
                                    
TestMultiNode/serial/StartAfterStop (13.82s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-058139 node start m03 --alsologtostderr: (12.977756701s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.82s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (123.2s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-058139
multinode_test.go:318: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-058139
E1218 22:59:10.767622    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: no such file or directory
multinode_test.go:318: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-058139: (22.696255813s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-058139 --wait=true -v=8 --alsologtostderr
E1218 22:59:37.876143    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
E1218 22:59:38.456808    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: no such file or directory
E1218 23:00:35.484300    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/functional-790753/client.crt: no such file or directory
E1218 23:01:00.924267    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-arm64 start -p multinode-058139 --wait=true -v=8 --alsologtostderr: (1m40.359111965s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-058139
--- PASS: TestMultiNode/serial/RestartKeepsNodes (123.20s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.09s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p multinode-058139 node delete m03: (4.361644609s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.09s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.72s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-arm64 -p multinode-058139 stop: (21.511393209s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-058139 status: exit status 7 (101.833194ms)

                                                
                                                
-- stdout --
	multinode-058139
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-058139-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-058139 status --alsologtostderr: exit status 7 (105.269754ms)

                                                
                                                
-- stdout --
	multinode-058139
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-058139-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1218 23:01:28.991784  129830 out.go:296] Setting OutFile to fd 1 ...
	I1218 23:01:28.991991  129830 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:01:28.992020  129830 out.go:309] Setting ErrFile to fd 2...
	I1218 23:01:28.992042  129830 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 23:01:28.992316  129830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17822-2192/.minikube/bin
	I1218 23:01:28.992506  129830 out.go:303] Setting JSON to false
	I1218 23:01:28.992628  129830 mustload.go:65] Loading cluster: multinode-058139
	I1218 23:01:28.992709  129830 notify.go:220] Checking for updates...
	I1218 23:01:28.993085  129830 config.go:182] Loaded profile config "multinode-058139": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 23:01:28.993123  129830 status.go:255] checking status of multinode-058139 ...
	I1218 23:01:28.993724  129830 cli_runner.go:164] Run: docker container inspect multinode-058139 --format={{.State.Status}}
	I1218 23:01:29.013109  129830 status.go:330] multinode-058139 host status = "Stopped" (err=<nil>)
	I1218 23:01:29.013137  129830 status.go:343] host is not running, skipping remaining checks
	I1218 23:01:29.013144  129830 status.go:257] multinode-058139 status: &{Name:multinode-058139 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1218 23:01:29.013168  129830 status.go:255] checking status of multinode-058139-m02 ...
	I1218 23:01:29.013451  129830 cli_runner.go:164] Run: docker container inspect multinode-058139-m02 --format={{.State.Status}}
	I1218 23:01:29.029631  129830 status.go:330] multinode-058139-m02 host status = "Stopped" (err=<nil>)
	I1218 23:01:29.029649  129830 status.go:343] host is not running, skipping remaining checks
	I1218 23:01:29.029656  129830 status.go:257] multinode-058139-m02 status: &{Name:multinode-058139-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.72s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (83.55s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-058139 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:382: (dbg) Done: out/minikube-linux-arm64 start -p multinode-058139 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m22.660277811s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p multinode-058139 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (83.55s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (35s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-058139
multinode_test.go:480: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-058139-m02 --driver=docker  --container-runtime=docker
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-058139-m02 --driver=docker  --container-runtime=docker: exit status 14 (118.712705ms)

                                                
                                                
-- stdout --
	* [multinode-058139-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17822
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17822-2192/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-2192/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-058139-m02' is duplicated with machine name 'multinode-058139-m02' in profile 'multinode-058139'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-058139-m03 --driver=docker  --container-runtime=docker
multinode_test.go:488: (dbg) Done: out/minikube-linux-arm64 start -p multinode-058139-m03 --driver=docker  --container-runtime=docker: (32.575885342s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-058139
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-058139: exit status 80 (332.964885ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-058139
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-058139-m03 already exists in multinode-058139-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-058139-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-058139-m03: (1.896600326s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.00s)

                                                
                                    
TestPreload (128.77s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-428253 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E1218 23:04:10.768815    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-428253 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m1.097933432s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-428253 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-428253 image pull gcr.io/k8s-minikube/busybox: (1.402416562s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-428253
E1218 23:04:37.876470    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-428253: (10.889791029s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-428253 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
E1218 23:05:35.484566    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/functional-790753/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-428253 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (53.023343112s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-428253 image list
helpers_test.go:175: Cleaning up "test-preload-428253" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-428253
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-428253: (2.120013826s)
--- PASS: TestPreload (128.77s)
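The test disables the preload tarball, side-loads an extra image, restarts the cluster, and asserts the image is still in the cache afterwards. Condensed (profile name illustrative; memory and verbosity flags as in the run above):

	minikube start -p preload-demo --preload=false --kubernetes-version=v1.24.4 \
	  --driver=docker --container-runtime=docker
	minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
	minikube stop -p preload-demo
	minikube start -p preload-demo --driver=docker --container-runtime=docker
	minikube -p preload-demo image list   # busybox should still be listed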

                                                
                                    
TestScheduledStopUnix (106.67s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-482401 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-482401 --memory=2048 --driver=docker  --container-runtime=docker: (33.244184135s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-482401 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-482401 -n scheduled-stop-482401
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-482401 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-482401 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-482401 -n scheduled-stop-482401
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-482401
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-482401 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1218 23:06:58.529960    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/functional-790753/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-482401
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-482401: exit status 7 (85.048915ms)

                                                
                                                
-- stdout --
	scheduled-stop-482401
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-482401 -n scheduled-stop-482401
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-482401 -n scheduled-stop-482401: exit status 7 (83.783179ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-482401" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-482401
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-482401: (1.666909588s)
--- PASS: TestScheduledStopUnix (106.67s)
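
The scheduled-stop flow above boils down to polling minikube status until the host reports Stopped; exit status 7 is expected once the profile is down, so the non-zero exit is not itself a failure. A minimal Go sketch of that polling loop, assuming only a minikube binary on PATH (the profile name is reused from this run purely for illustration):

// pollstop.go: a minimal sketch, not part of the test suite.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	profile := "scheduled-stop-482401" // illustrative profile name
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// `minikube status` exits non-zero (7) for a stopped host, so the
		// error is expected here; we inspect stdout instead.
		out, _ := exec.Command("minikube", "status",
			"--format={{.Host}}", "-p", profile).Output()
		if strings.TrimSpace(string(out)) == "Stopped" {
			fmt.Println("host is stopped")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for scheduled stop")
}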

                                                
                                    
TestSkaffold (104.94s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe84454278 version
skaffold_test.go:63: skaffold version: v2.9.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-545039 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-545039 --memory=2600 --driver=docker  --container-runtime=docker: (31.402968355s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe84454278 run --minikube-profile skaffold-545039 --kube-context skaffold-545039 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe84454278 run --minikube-profile skaffold-545039 --kube-context skaffold-545039 --status-check=true --port-forward=false --interactive=false: (57.605597103s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-5d45f489c-gql99" [ce6e7263-b3c4-4039-ad3a-3c7a38886d5a] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003346206s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-775fbb5869-jxnf7" [efa1c8aa-d5d1-429f-a214-09affb72dd12] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003157304s
helpers_test.go:175: Cleaning up "skaffold-545039" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-545039
E1218 23:09:10.767250    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-545039: (2.88184974s)
--- PASS: TestSkaffold (104.94s)

                                                
                                    
TestInsufficientStorage (14.33s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-129048 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-129048 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (11.988107047s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f4180b7e-86c9-4211-a8f2-6c076388847e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-129048] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b3f9b1ba-23c1-4557-b60e-e0f1d695b43b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17822"}}
	{"specversion":"1.0","id":"59ef54f5-612b-4c4f-872e-8472e8f6152b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fbdf11c3-5969-447f-b460-a0ae0dd3887c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17822-2192/kubeconfig"}}
	{"specversion":"1.0","id":"1b67bee3-893b-466e-a494-bb6045822583","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-2192/.minikube"}}
	{"specversion":"1.0","id":"ffb50511-d6d8-413d-ab43-d58ded285db9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"5458310c-c113-48f8-8a9d-b3c0bb90a56e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9f8a86e5-71bc-4040-ab10-e10050f42eaf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"7dcc2888-6390-4437-a009-16d860ec707c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"8060ad86-75fb-4c61-87a2-a72a779c969a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b558672d-fdae-4c2a-af67-41b288531fbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"0cec4237-fe42-47fe-9103-9e51f07d6175","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-129048 in cluster insufficient-storage-129048","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"30603de5-fecc-417e-9a05-4f4e5fabbb74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1702920864-17822 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"fe1d69db-234a-4f79-974f-edd3fdb58a29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"1f61afac-4982-47a5-a87f-a51518820c5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-129048 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-129048 --output=json --layout=cluster: exit status 7 (322.002277ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-129048","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-129048","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1218 23:09:24.688731  165160 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-129048" does not appear in /home/jenkins/minikube-integration/17822-2192/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-129048 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-129048 --output=json --layout=cluster: exit status 7 (316.689485ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-129048","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-129048","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1218 23:09:25.005860  165213 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-129048" does not appear in /home/jenkins/minikube-integration/17822-2192/kubeconfig
	E1218 23:09:25.018017  165213 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/insufficient-storage-129048/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-129048" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-129048
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-129048: (1.702584418s)
--- PASS: TestInsufficientStorage (14.33s)
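
Each line of the --output=json stream above is a CloudEvent, and the out-of-disk failure arrives as an io.k8s.sigs.minikube.error event with exitcode "26" and name RSRC_DOCKER_STORAGE. A minimal sketch of scanning that stream, modeling only the fields visible in this log:

// A sketch; pipe `minikube start --output=json ...` into stdin.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type cloudEvent struct {
	Type string          `json:"type"`
	Data json.RawMessage `json:"data"`
}

type errorData struct {
	ExitCode string `json:"exitcode"`
	Name     string `json:"name"`
	Message  string `json:"message"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // skip non-JSON lines
		}
		if ev.Type != "io.k8s.sigs.minikube.error" {
			continue
		}
		var ed errorData
		if json.Unmarshal(ev.Data, &ed) == nil && ed.Name == "RSRC_DOCKER_STORAGE" {
			fmt.Printf("out of disk: exitcode=%s msg=%s\n", ed.ExitCode, ed.Message)
		}
	}
}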

                                                
                                    
TestRunningBinaryUpgrade (91.6s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.17.0.663993772.exe start -p running-upgrade-753715 --memory=2200 --vm-driver=docker  --container-runtime=docker
E1218 23:15:20.456128    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/skaffold-545039/client.crt: no such file or directory
E1218 23:15:35.483920    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/functional-790753/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.17.0.663993772.exe start -p running-upgrade-753715 --memory=2200 --vm-driver=docker  --container-runtime=docker: (56.114319099s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-753715 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1218 23:16:42.376666    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/skaffold-545039/client.crt: no such file or directory
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-753715 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (32.004843268s)
helpers_test.go:175: Cleaning up "running-upgrade-753715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-753715
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-753715: (2.356947494s)
--- PASS: TestRunningBinaryUpgrade (91.60s)

                                                
                                    
TestKubernetesUpgrade (421.33s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-760512 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-760512 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m6.271880861s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-760512
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-760512: (10.909977389s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-760512 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-760512 status --format={{.Host}}: exit status 7 (103.467474ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-760512 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-760512 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m57.611752979s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-760512 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-760512 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-760512 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker: exit status 106 (118.480194ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-760512] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17822
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17822-2192/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-2192/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-760512
	    minikube start -p kubernetes-upgrade-760512 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7605122 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-760512 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-760512 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-760512 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (43.685434245s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-760512" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-760512
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-760512: (2.461611844s)
--- PASS: TestKubernetesUpgrade (421.33s)
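
The downgrade guard above is surfaced purely through exit status 106 (K8S_DOWNGRADE_UNSUPPORTED). A minimal sketch of detecting it from a caller, assuming a minikube binary on PATH and a hypothetical profile name:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Asking an existing newer cluster to start at an older version
	// should be rejected with exit status 106, as seen in this log.
	cmd := exec.Command("minikube", "start", "-p", "kubernetes-upgrade-demo",
		"--kubernetes-version=v1.16.0")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 106 {
		fmt.Println("downgrade rejected; delete the profile or keep the newer version")
		return
	}
	fmt.Println("start finished:", err)
}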

                                                
                                    
TestMissingContainerUpgrade (199.35s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.2017018257.exe start -p missing-upgrade-213106 --memory=2200 --driver=docker  --container-runtime=docker
E1218 23:10:33.817862    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: no such file or directory
E1218 23:10:35.483916    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/functional-790753/client.crt: no such file or directory
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.17.0.2017018257.exe start -p missing-upgrade-213106 --memory=2200 --driver=docker  --container-runtime=docker: (1m54.766770986s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-213106
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-213106: (10.413184699s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-213106
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-213106 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-213106 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m10.774762182s)
helpers_test.go:175: Cleaning up "missing-upgrade-213106" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-213106
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-213106: (2.170979071s)
--- PASS: TestMissingContainerUpgrade (199.35s)

                                                
                                    
TestPause/serial/Start (96.48s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-989247 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E1218 23:09:37.877698    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-989247 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m36.478316132s)
--- PASS: TestPause/serial/Start (96.48s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (36.7s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-989247 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-989247 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (36.663484822s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (36.70s)

                                                
                                    
TestPause/serial/Pause (0.95s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-989247 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.95s)

                                                
                                    
TestPause/serial/VerifyStatus (0.43s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-989247 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-989247 --output=json --layout=cluster: exit status 2 (432.061487ms)

                                                
                                                
-- stdout --
	{"Name":"pause-989247","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-989247","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.43s)
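
status --output=json --layout=cluster reports a paused profile with StatusCode 418 ("Paused") at the cluster and apiserver level while the kubelet shows 405 ("Stopped"), as in the stdout above. A minimal decoding sketch, modeling only the fields visible in this log:

package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string               `json:"Name"`
		StatusCode int                  `json:"StatusCode"`
		Components map[string]component `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	// Trimmed from the stdout above; only fields we model are kept.
	raw := []byte(`{"Name":"pause-989247","StatusCode":418,"StatusName":"Paused",` +
		`"Nodes":[{"Name":"pause-989247","StatusCode":200,"Components":` +
		`{"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`)
	var st clusterStatus
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	// Prints: Paused -> Stopped
	fmt.Println(st.StatusName, "->", st.Nodes[0].Components["kubelet"].StatusName)
}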

                                                
                                    
TestPause/serial/Unpause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-989247 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.74s)

                                                
                                    
TestPause/serial/PauseAgain (1.14s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-989247 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-989247 --alsologtostderr -v=5: (1.136698719s)
--- PASS: TestPause/serial/PauseAgain (1.14s)

                                                
                                    
TestPause/serial/DeletePaused (2.16s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-989247 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-989247 --alsologtostderr -v=5: (2.161645309s)
--- PASS: TestPause/serial/DeletePaused (2.16s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.48s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-989247
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-989247: exit status 1 (17.703127ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-989247: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.48s)
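
The deleted-resources check relies on docker volume inspect failing once the profile's volume is gone: exit status 1 with [] on stdout and "no such volume" on stderr, as captured above. A minimal sketch of that check, assuming a docker CLI on PATH (the volume name is taken from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// CombinedOutput captures the daemon's "no such volume" message,
	// which arrives on stderr.
	out, err := exec.Command("docker", "volume", "inspect", "pause-989247").CombinedOutput()
	if err != nil && strings.Contains(strings.ToLower(string(out)), "no such volume") {
		fmt.Println("volume cleaned up as expected")
		return
	}
	fmt.Println("volume still present (or unexpected error):", err)
}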

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.95s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.95s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (88.62s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.566980737.exe start -p stopped-upgrade-627432 --memory=2200 --vm-driver=docker  --container-runtime=docker
E1218 23:13:58.496230    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/skaffold-545039/client.crt: no such file or directory
E1218 23:13:58.501523    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/skaffold-545039/client.crt: no such file or directory
E1218 23:13:58.511746    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/skaffold-545039/client.crt: no such file or directory
E1218 23:13:58.531972    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/skaffold-545039/client.crt: no such file or directory
E1218 23:13:58.572194    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/skaffold-545039/client.crt: no such file or directory
E1218 23:13:58.652463    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/skaffold-545039/client.crt: no such file or directory
E1218 23:13:58.813423    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/skaffold-545039/client.crt: no such file or directory
E1218 23:13:59.138317    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/skaffold-545039/client.crt: no such file or directory
E1218 23:13:59.778978    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/skaffold-545039/client.crt: no such file or directory
E1218 23:14:01.059863    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/skaffold-545039/client.crt: no such file or directory
E1218 23:14:03.653351    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/skaffold-545039/client.crt: no such file or directory
E1218 23:14:08.773516    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/skaffold-545039/client.crt: no such file or directory
E1218 23:14:10.766999    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: no such file or directory
E1218 23:14:19.014531    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/skaffold-545039/client.crt: no such file or directory
E1218 23:14:37.876488    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.17.0.566980737.exe start -p stopped-upgrade-627432 --memory=2200 --vm-driver=docker  --container-runtime=docker: (53.586244593s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.17.0.566980737.exe -p stopped-upgrade-627432 stop
E1218 23:14:39.495558    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/skaffold-545039/client.crt: no such file or directory
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.17.0.566980737.exe -p stopped-upgrade-627432 stop: (1.31223217s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-627432 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-627432 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (33.717192893s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (88.62s)
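
The stopped-binary upgrade above follows a three-step flow: provision with an older release binary, stop the cluster, then restart it with the binary under test, which must adopt and upgrade the stopped cluster in place. A minimal sketch of that sequence; the binary paths and profile name here are hypothetical:

package main

import (
	"fmt"
	"os/exec"
)

func run(bin string, args ...string) {
	if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
		fmt.Printf("%s %v failed: %v\n%s", bin, args, err, out)
	}
}

func main() {
	old, current, profile := "/tmp/minikube-v1.17.0", "./minikube", "stopped-upgrade-demo"
	run(old, "start", "-p", profile, "--memory=2200", "--driver=docker")
	run(old, "stop", "-p", profile)
	// The newer binary must pick up and upgrade the stopped cluster.
	run(current, "start", "-p", profile, "--memory=2200", "--driver=docker")
}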

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.82s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-627432
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-627432: (1.822258069s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.82s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-608710 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-608710 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (86.882762ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-608710] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17822
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17822-2192/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17822-2192/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (40.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-608710 --driver=docker  --container-runtime=docker
E1218 23:17:40.924809    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-608710 --driver=docker  --container-runtime=docker: (40.510963484s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-608710 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.89s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (18.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-608710 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-608710 --no-kubernetes --driver=docker  --container-runtime=docker: (15.550274311s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-608710 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-608710 status -o json: exit status 2 (557.897327ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-608710","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-608710
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-608710: (1.987275155s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.10s)

                                                
                                    
TestNoKubernetes/serial/Start (11.51s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-608710 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-608710 --no-kubernetes --driver=docker  --container-runtime=docker: (11.506413463s)
--- PASS: TestNoKubernetes/serial/Start (11.51s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-608710 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-608710 "sudo systemctl is-active --quiet service kubelet": exit status 1 (495.157089ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.50s)
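
systemctl is-active --quiet reports state only through its exit status (0 for active, 3 for inactive), and minikube ssh propagates that status, which is why the test treats the non-zero exit above as proof that kubelet is not running. A minimal sketch, assuming a minikube binary on PATH and a hypothetical profile:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses output; the exit status alone carries the answer.
	err := exec.Command("minikube", "ssh", "-p", "NoKubernetes-demo",
		"sudo systemctl is-active --quiet service kubelet").Run()
	if err != nil {
		fmt.Println("kubelet is not running, as expected:", err)
		return
	}
	fmt.Println("unexpected: kubelet is active")
}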

                                                
                                    
TestNoKubernetes/serial/ProfileList (5.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-arm64 profile list: (4.306318338s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (5.11s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-608710
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-608710: (1.36231543s)
--- PASS: TestNoKubernetes/serial/Stop (1.36s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-608710 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-608710 --driver=docker  --container-runtime=docker: (8.278770838s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.28s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-608710 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-608710 "sudo systemctl is-active --quiet service kubelet": exit status 1 (421.605483ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (131.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-809833 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-809833 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (2m11.393753095s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (131.39s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-809833 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [600a10f4-9e7b-4121-a51e-1e7541c0d6b8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [600a10f4-9e7b-4121-a51e-1e7541c0d6b8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003310767s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-809833 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.49s)
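
The deploy check above waits up to 8m0s for pods matching integration-test=busybox to leave Pending and become healthy. A minimal sketch of the same wait done by polling kubectl for the pod phase rather than using the suite's helpers (context and label are taken from this log; checking phase Running is a simplification of the suite's readiness check):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(8 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "old-k8s-version-809833",
			"get", "pods", "-l", "integration-test=busybox",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Running" {
			fmt.Println("busybox pod is running")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for busybox pod")
}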

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.58s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-809833 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-809833 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.403987942s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-809833 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.58s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.67s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-809833 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-809833 --alsologtostderr -v=3: (11.67167329s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.67s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (66.36s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-122607 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-122607 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2: (1m6.361533682s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (66.36s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-809833 -n old-k8s-version-809833
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-809833 -n old-k8s-version-809833: exit status 7 (108.2946ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-809833 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (450.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-809833 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E1218 23:23:38.530910    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/functional-790753/client.crt: no such file or directory
E1218 23:23:58.492207    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/skaffold-545039/client.crt: no such file or directory
E1218 23:24:10.766917    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-809833 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (7m29.824336916s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-809833 -n old-k8s-version-809833
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (450.45s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-122607 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [303a3a1f-1cb0-4857-82b5-ff9fa147e76e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [303a3a1f-1cb0-4857-82b5-ff9fa147e76e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004249659s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-122607 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.37s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-122607 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-122607 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (10.95s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-122607 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-122607 --alsologtostderr -v=3: (10.945218722s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.95s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-122607 -n no-preload-122607
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-122607 -n no-preload-122607: exit status 7 (88.740541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-122607 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (349.87s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-122607 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2
E1218 23:24:37.876460    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
E1218 23:25:35.484066    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/functional-790753/client.crt: no such file or directory
E1218 23:27:13.818069    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: no such file or directory
E1218 23:28:58.492223    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/skaffold-545039/client.crt: no such file or directory
E1218 23:29:10.767194    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: no such file or directory
E1218 23:29:37.876011    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
E1218 23:30:21.578363    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/skaffold-545039/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-122607 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2: (5m49.323565492s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-122607 -n no-preload-122607
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (349.87s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (15.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-gqwst" [13f892fe-2e68-4c15-8f04-8d0156cb954e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-gqwst" [13f892fe-2e68-4c15-8f04-8d0156cb954e] Running
E1218 23:30:35.484156    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/functional-790753/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.00348051s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (15.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-gqwst" [13f892fe-2e68-4c15-8f04-8d0156cb954e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00507583s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-122607 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-122607 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)
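
The image check above dumps the cluster's images as JSON and flags anything outside minikube's expected set (here the leftover busybox test image). A hedged sketch of reading that output follows; decoding into generic maps avoids committing to exact field names, and "repoTags" is an assumed key, not one confirmed by this log.

// Minimal sketch: list images via minikube's JSON output and print a
// field. The "repoTags" key is an assumption about the JSON shape.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p",
		"no-preload-122607", "image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	var images []map[string]interface{}
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img["repoTags"]) // busybox:1.28.4-glibc should appear here
	}
}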

TestStartStop/group/no-preload/serial/Pause (3.09s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-122607 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-122607 -n no-preload-122607
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-122607 -n no-preload-122607: exit status 2 (354.552793ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-122607 -n no-preload-122607
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-122607 -n no-preload-122607: exit status 2 (370.076266ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-122607 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-122607 -n no-preload-122607
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-122607 -n no-preload-122607
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.09s)
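
The Pause sequence above is: pause the profile, read per-component state through Go-template status output (expecting APIServer=Paused and Kubelet=Stopped, with exit status 2 tolerated because minikube status exits non-zero when components are not running), then unpause and re-check. A minimal Go sketch of that round trip, with the binary path and profile name taken from the log:

// Minimal sketch of the pause/unpause round trip. Output() still returns
// the captured stdout when the command exits non-zero, so the expected
// exit status 2 from "status" is simply ignored here.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func status(profile, field string) string {
	out, _ := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{."+field+"}}", "-p", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	profile := "no-preload-122607"
	exec.Command("out/minikube-linux-arm64", "pause", "-p", profile).Run()
	fmt.Println("apiserver:", status(profile, "APIServer")) // want "Paused"
	fmt.Println("kubelet:", status(profile, "Kubelet"))     // want "Stopped"
	exec.Command("out/minikube-linux-arm64", "unpause", "-p", profile).Run()
}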

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-rvpt8" [4103cb18-ecfe-4ead-acbd-2fe24ca5a637] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004087207s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/FirstStart (95.52s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-465158 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-465158 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (1m35.522477087s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (95.52s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-rvpt8" [4103cb18-ecfe-4ead-acbd-2fe24ca5a637] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003817141s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-809833 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-809833 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/old-k8s-version/serial/Pause (3.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-809833 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-809833 -n old-k8s-version-809833
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-809833 -n old-k8s-version-809833: exit status 2 (461.717207ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-809833 -n old-k8s-version-809833
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-809833 -n old-k8s-version-809833: exit status 2 (405.711159ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-809833 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-809833 -n old-k8s-version-809833
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-809833 -n old-k8s-version-809833
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.97s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (94.85s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-830445 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-830445 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (1m34.845741936s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (94.85s)

TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-465158 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [10b2aeb4-e0fb-4ea1-b30a-004695ef61c8] Pending
helpers_test.go:344: "busybox" [10b2aeb4-e0fb-4ea1-b30a-004695ef61c8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [10b2aeb4-e0fb-4ea1-b30a-004695ef61c8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004256818s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-465158 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.36s)
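
DeployApp above creates a busybox pod from testdata, waits for it to run, then execs "ulimit -n" inside it to prove exec works against the cluster. A minimal sketch follows; the real test polls the integration-test=busybox selector, so kubectl wait is a stand-in here, and the context and manifest path mirror the log.

// Minimal sketch: create the pod, wait for readiness, exec inside it.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) string {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		fmt.Println("error:", err)
	}
	return string(out)
}

func main() {
	ctx := "--context=embed-certs-465158"
	run(ctx, "create", "-f", "testdata/busybox.yaml")
	run(ctx, "wait", "--for=condition=Ready", "pod/busybox", "--timeout=8m")
	// Prints the pod's open-file limit, which the test inspects.
	fmt.Print(run(ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n"))
}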

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-465158 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-465158 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.08119036s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-465158 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.21s)
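
EnableAddonWhileActive above enables metrics-server while overriding its image and registry (echoserver on fake.domain), then inspects the resulting Deployment; the test only needs the Deployment spec to reflect the override, not a working metrics backend. A minimal sketch, with all flags copied from the log:

// Minimal sketch: enable an addon with image/registry overrides, then
// describe the Deployment to confirm the override landed.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	enable := exec.Command("out/minikube-linux-arm64", "addons", "enable",
		"metrics-server", "-p", "embed-certs-465158",
		"--images=MetricsServer=registry.k8s.io/echoserver:1.4",
		"--registries=MetricsServer=fake.domain")
	if out, err := enable.CombinedOutput(); err != nil {
		fmt.Println(string(out), err)
		return
	}
	out, _ := exec.Command("kubectl", "--context", "embed-certs-465158",
		"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
	fmt.Print(string(out))
}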

TestStartStop/group/embed-certs/serial/Stop (11.03s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-465158 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-465158 --alsologtostderr -v=3: (11.027240263s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.03s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-830445 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fc610ff0-a35d-4f65-a366-4ae0fc37a6f7] Pending
helpers_test.go:344: "busybox" [fc610ff0-a35d-4f65-a366-4ae0fc37a6f7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fc610ff0-a35d-4f65-a366-4ae0fc37a6f7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004262351s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-830445 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.39s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-465158 -n embed-certs-465158
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-465158 -n embed-certs-465158: exit status 7 (85.308906ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-465158 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)
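
EnableAddonAfterStop above relies on minikube status reporting a stopped host through exit status 7, which the test treats as acceptable, and on addon toggles being persisted while the cluster is down (they take effect on the next start). A minimal sketch of reading that exit code without failing outright:

// Minimal sketch: read the status exit code via *exec.ExitError instead
// of treating any non-zero exit as a failure.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "embed-certs-465158").Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("host %q, exit code %d (may be ok)\n",
			string(out), exitErr.ExitCode())
	}
	// Toggling an addon while stopped is fine; it applies on next start.
	exec.Command("out/minikube-linux-arm64", "addons", "enable",
		"dashboard", "-p", "embed-certs-465158").Run()
}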

TestStartStop/group/embed-certs/serial/SecondStart (321.13s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-465158 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-465158 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (5m20.619022178s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-465158 -n embed-certs-465158
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (321.13s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-830445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-830445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.28896764s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-830445 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.43s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-830445 --alsologtostderr -v=3
E1218 23:32:57.157905    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/old-k8s-version-809833/client.crt: no such file or directory
E1218 23:32:57.163160    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/old-k8s-version-809833/client.crt: no such file or directory
E1218 23:32:57.173359    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/old-k8s-version-809833/client.crt: no such file or directory
E1218 23:32:57.193488    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/old-k8s-version-809833/client.crt: no such file or directory
E1218 23:32:57.235180    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/old-k8s-version-809833/client.crt: no such file or directory
E1218 23:32:57.315273    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/old-k8s-version-809833/client.crt: no such file or directory
E1218 23:32:57.476113    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/old-k8s-version-809833/client.crt: no such file or directory
E1218 23:32:57.796617    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/old-k8s-version-809833/client.crt: no such file or directory
E1218 23:32:58.437323    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/old-k8s-version-809833/client.crt: no such file or directory
E1218 23:32:59.718088    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/old-k8s-version-809833/client.crt: no such file or directory
E1218 23:33:02.278805    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/old-k8s-version-809833/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-830445 --alsologtostderr -v=3: (11.23651935s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.24s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-830445 -n default-k8s-diff-port-830445
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-830445 -n default-k8s-diff-port-830445: exit status 7 (87.691701ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-830445 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (357.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-830445 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E1218 23:33:07.399744    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/old-k8s-version-809833/client.crt: no such file or directory
E1218 23:33:17.640443    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/old-k8s-version-809833/client.crt: no such file or directory
E1218 23:33:38.121060    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/old-k8s-version-809833/client.crt: no such file or directory
E1218 23:33:58.491765    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/skaffold-545039/client.crt: no such file or directory
E1218 23:34:10.767675    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: no such file or directory
E1218 23:34:14.507936    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/no-preload-122607/client.crt: no such file or directory
E1218 23:34:14.513154    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/no-preload-122607/client.crt: no such file or directory
E1218 23:34:14.523365    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/no-preload-122607/client.crt: no such file or directory
E1218 23:34:14.543623    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/no-preload-122607/client.crt: no such file or directory
E1218 23:34:14.583940    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/no-preload-122607/client.crt: no such file or directory
E1218 23:34:14.664207    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/no-preload-122607/client.crt: no such file or directory
E1218 23:34:14.824612    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/no-preload-122607/client.crt: no such file or directory
E1218 23:34:15.145170    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/no-preload-122607/client.crt: no such file or directory
E1218 23:34:15.785361    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/no-preload-122607/client.crt: no such file or directory
E1218 23:34:17.066064    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/no-preload-122607/client.crt: no such file or directory
E1218 23:34:19.081268    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/old-k8s-version-809833/client.crt: no such file or directory
E1218 23:34:19.626598    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/no-preload-122607/client.crt: no such file or directory
E1218 23:34:20.925802    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
E1218 23:34:24.747358    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/no-preload-122607/client.crt: no such file or directory
E1218 23:34:34.988345    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/no-preload-122607/client.crt: no such file or directory
E1218 23:34:37.876775    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
E1218 23:34:55.469428    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/no-preload-122607/client.crt: no such file or directory
E1218 23:35:35.484749    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/functional-790753/client.crt: no such file or directory
E1218 23:35:36.430455    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/no-preload-122607/client.crt: no such file or directory
E1218 23:35:41.002381    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/old-k8s-version-809833/client.crt: no such file or directory
E1218 23:36:58.351575    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/no-preload-122607/client.crt: no such file or directory
E1218 23:37:57.157024    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/old-k8s-version-809833/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-830445 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (5m56.239098254s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-830445 -n default-k8s-diff-port-830445
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (357.04s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-4wpnb" [9435f92b-fef2-41cc-b43b-f7c118ceb248] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-4wpnb" [9435f92b-fef2-41cc-b43b-f7c118ceb248] Running
E1218 23:38:24.843496    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/old-k8s-version-809833/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.004508718s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-4wpnb" [9435f92b-fef2-41cc-b43b-f7c118ceb248] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004538992s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-465158 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-465158 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/embed-certs/serial/Pause (4.68s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-465158 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-465158 --alsologtostderr -v=1: (1.007130007s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-465158 -n embed-certs-465158
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-465158 -n embed-certs-465158: exit status 2 (626.525435ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-465158 -n embed-certs-465158
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-465158 -n embed-certs-465158: exit status 2 (578.331886ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-465158 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p embed-certs-465158 --alsologtostderr -v=1: (1.027960773s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-465158 -n embed-certs-465158
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-465158 -n embed-certs-465158
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.68s)

TestStartStop/group/newest-cni/serial/FirstStart (60.05s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-162558 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2
E1218 23:38:58.491806    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/skaffold-545039/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-162558 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2: (1m0.052427014s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (60.05s)
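
The newest-cni start above is the most flag-heavy invocation in this run: --wait is narrowed to apiserver, system_pods and default_sa (pods cannot schedule until the CNI is configured, so waiting on everything would hang), the pod CIDR is pushed through kubeadm via --extra-config, and a feature gate is enabled. A minimal Go wrapper around the flags from the log (logging flags omitted):

// Minimal sketch: the logged newest-cni start invocation.
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "newest-cni-162558", "--memory=2200",
		"--wait=apiserver,system_pods,default_sa", // skip unschedulable pods
		"--feature-gates", "ServerSideApply=true",
		"--network-plugin=cni",
		"--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16",
		"--driver=docker", "--container-runtime=docker",
		"--kubernetes-version=v1.29.0-rc.2")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}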

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8f7b9" [c0151360-8768-47da-a8a1-ec6ebd29c40b] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1218 23:39:10.767724    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8f7b9" [c0151360-8768-47da-a8a1-ec6ebd29c40b] Running
E1218 23:39:14.507250    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/no-preload-122607/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.004595277s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8f7b9" [c0151360-8768-47da-a8a1-ec6ebd29c40b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004484693s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-830445 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-830445 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-830445 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-830445 --alsologtostderr -v=1: (1.020322052s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-830445 -n default-k8s-diff-port-830445
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-830445 -n default-k8s-diff-port-830445: exit status 2 (404.248828ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-830445 -n default-k8s-diff-port-830445
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-830445 -n default-k8s-diff-port-830445: exit status 2 (460.504223ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-830445 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-830445 -n default-k8s-diff-port-830445
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-830445 -n default-k8s-diff-port-830445
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.12s)

TestNetworkPlugins/group/auto/Start (90.87s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-335559 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E1218 23:39:37.876209    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-335559 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m30.86773738s)
--- PASS: TestNetworkPlugins/group/auto/Start (90.87s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.46s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-162558 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-162558 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.464753747s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.46s)

TestStartStop/group/newest-cni/serial/Stop (11.19s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-162558 --alsologtostderr -v=3
E1218 23:39:42.191929    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/no-preload-122607/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-162558 --alsologtostderr -v=3: (11.186932705s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.19s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-162558 -n newest-cni-162558
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-162558 -n newest-cni-162558: exit status 7 (100.9863ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-162558 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/newest-cni/serial/SecondStart (35.61s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-162558 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2
E1218 23:40:18.531890    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/functional-790753/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-162558 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2: (35.219673808s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-162558 -n newest-cni-162558
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (35.61s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-162558 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/Pause (3.15s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-162558 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-162558 -n newest-cni-162558
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-162558 -n newest-cni-162558: exit status 2 (356.806208ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-162558 -n newest-cni-162558
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-162558 -n newest-cni-162558: exit status 2 (392.742107ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-162558 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-162558 -n newest-cni-162558
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-162558 -n newest-cni-162558
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.15s)
E1218 23:48:36.384485    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/custom-flannel-335559/client.crt: no such file or directory
E1218 23:48:36.389876    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/custom-flannel-335559/client.crt: no such file or directory
E1218 23:48:36.400095    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/custom-flannel-335559/client.crt: no such file or directory
E1218 23:48:36.420338    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/custom-flannel-335559/client.crt: no such file or directory
E1218 23:48:36.460638    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/custom-flannel-335559/client.crt: no such file or directory
E1218 23:48:36.540940    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/custom-flannel-335559/client.crt: no such file or directory
E1218 23:48:36.701182    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/custom-flannel-335559/client.crt: no such file or directory
E1218 23:48:37.021557    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/custom-flannel-335559/client.crt: no such file or directory
E1218 23:48:37.661925    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/custom-flannel-335559/client.crt: no such file or directory
E1218 23:48:38.942376    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/custom-flannel-335559/client.crt: no such file or directory
E1218 23:48:41.502495    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/custom-flannel-335559/client.crt: no such file or directory
E1218 23:48:44.195765    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/auto-335559/client.crt: no such file or directory
E1218 23:48:46.623008    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/custom-flannel-335559/client.crt: no such file or directory
E1218 23:48:47.291763    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/calico-335559/client.crt: no such file or directory
E1218 23:48:56.863531    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/custom-flannel-335559/client.crt: no such file or directory
E1218 23:48:58.492209    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/skaffold-545039/client.crt: no such file or directory
E1218 23:49:10.767177    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: no such file or directory
E1218 23:49:14.507468    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/no-preload-122607/client.crt: no such file or directory
E1218 23:49:17.344114    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/custom-flannel-335559/client.crt: no such file or directory
E1218 23:49:20.204158    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/old-k8s-version-809833/client.crt: no such file or directory
E1218 23:49:21.461676    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/kindnet-335559/client.crt: no such file or directory

TestNetworkPlugins/group/kindnet/Start (62.98s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-335559 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E1218 23:40:35.484429    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/functional-790753/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-335559 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m2.982302003s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (62.98s)

TestNetworkPlugins/group/auto/KubeletFlags (0.48s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-335559 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.48s)

TestNetworkPlugins/group/auto/NetCatPod (13.44s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-335559 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-x9nb4" [688c9408-1d91-4fd6-8f36-e6e1ef78f611] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-x9nb4" [688c9408-1d91-4fd6-8f36-e6e1ef78f611] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.003226143s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.44s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-335559 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)
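
The DNS probe above resolves kubernetes.default from inside the netcat deployment, which exercises the cluster's own DNS service rather than the host resolver. A minimal sketch, with the context name from the log:

// Minimal sketch: in-cluster DNS check via kubectl exec.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "auto-335559",
		"exec", "deployment/netcat", "--",
		"nslookup", "kubernetes.default").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("in-cluster DNS lookup failed:", err)
	}
}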

TestNetworkPlugins/group/auto/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-335559 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.22s)

TestNetworkPlugins/group/auto/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-335559 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)
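
Localhost and HairPin both drive netcat in zero-I/O scan mode: -z closes the connection as soon as it opens, -w 5 bounds each attempt at five seconds, and -i 5 spaces successive attempts. HairPin is the stricter check, because the pod dials its own service name (netcat) and the packet must be NATed back to the pod that sent it, which only works when the bridge/CNI has hairpin mode enabled. Manually (as logged above):

  kubectl --context auto-335559 exec deployment/netcat -- \
      /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
  # exit status 0 means the pod reached itself through its service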

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-6g2m6" [992b98f7-5e1e-421d-8141-5888ed425cda] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.007286236s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
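
ControllerPod subtests wait for the CNI's own daemon pod (here, label app=kindnet in kube-system) to be Running before the group's connectivity results are trusted. Quick manual equivalent:

  kubectl --context kindnet-335559 -n kube-system get pods -l app=kindnet
  # expect one Running kindnet-* pod per node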

TestNetworkPlugins/group/calico/Start (86.59s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-335559 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-335559 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m26.591814259s)
--- PASS: TestNetworkPlugins/group/calico/Start (86.59s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-335559 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.39s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-335559 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nfhk9" [01ad1edb-5e27-47ca-b190-0d2bffee3255] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-nfhk9" [01ad1edb-5e27-47ca-b190-0d2bffee3255] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004338611s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.39s)

TestNetworkPlugins/group/kindnet/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-335559 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

TestNetworkPlugins/group/kindnet/Localhost (0.26s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-335559 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.26s)

TestNetworkPlugins/group/kindnet/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-335559 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

TestNetworkPlugins/group/custom-flannel/Start (72.95s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-335559 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E1218 23:42:45.068489    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/default-k8s-diff-port-830445/client.crt: no such file or directory
E1218 23:42:45.073823    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/default-k8s-diff-port-830445/client.crt: no such file or directory
E1218 23:42:45.083996    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/default-k8s-diff-port-830445/client.crt: no such file or directory
E1218 23:42:45.104227    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/default-k8s-diff-port-830445/client.crt: no such file or directory
E1218 23:42:45.144477    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/default-k8s-diff-port-830445/client.crt: no such file or directory
E1218 23:42:45.225063    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/default-k8s-diff-port-830445/client.crt: no such file or directory
E1218 23:42:45.385645    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/default-k8s-diff-port-830445/client.crt: no such file or directory
E1218 23:42:45.706522    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/default-k8s-diff-port-830445/client.crt: no such file or directory
E1218 23:42:46.346659    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/default-k8s-diff-port-830445/client.crt: no such file or directory
E1218 23:42:47.627800    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/default-k8s-diff-port-830445/client.crt: no such file or directory
E1218 23:42:50.188432    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/default-k8s-diff-port-830445/client.crt: no such file or directory
E1218 23:42:55.309179    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/default-k8s-diff-port-830445/client.crt: no such file or directory
E1218 23:42:57.157091    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/old-k8s-version-809833/client.crt: no such file or directory
E1218 23:43:05.549772    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/default-k8s-diff-port-830445/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-335559 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m12.954336757s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (72.95s)
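
The E1218 ... cert_rotation.go:168 lines interleaved above are client-go background noise rather than test output: a certificate-rotation watcher keeps trying to reload client.crt files for profiles (default-k8s-diff-port-830445, old-k8s-version-809833, ...) that earlier tests already deleted. When scanning a raw run like this one, they can be filtered out first (report.txt is a hypothetical local copy of this log):

  grep -v 'cert_rotation.go' report.txt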

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-566qt" [93b1b27d-8346-42bf-8968-70dc05492989] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005594361s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.39s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-335559 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

TestNetworkPlugins/group/calico/NetCatPod (11.31s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-335559 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zvrrq" [0ceefa2b-fcc7-4bd3-b02d-da08da44d2f5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zvrrq" [0ceefa2b-fcc7-4bd3-b02d-da08da44d2f5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004445792s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.31s)

TestNetworkPlugins/group/calico/DNS (0.28s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-335559 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.28s)

TestNetworkPlugins/group/calico/Localhost (0.26s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-335559 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.26s)

TestNetworkPlugins/group/calico/HairPin (0.25s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-335559 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.25s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.61s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-335559 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.61s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (13.39s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-335559 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cspt5" [23643e39-5e16-45cc-809d-6d346159717c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-cspt5" [23643e39-5e16-45cc-809d-6d346159717c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.004125533s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.39s)

TestNetworkPlugins/group/custom-flannel/DNS (0.25s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-335559 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.24s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-335559 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.24s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.26s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-335559 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.26s)

TestNetworkPlugins/group/false/Start (94.3s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-335559 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
E1218 23:43:53.818730    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: no such file or directory
E1218 23:43:58.491838    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/skaffold-545039/client.crt: no such file or directory
E1218 23:44:06.990986    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/default-k8s-diff-port-830445/client.crt: no such file or directory
E1218 23:44:10.767973    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/ingress-addon-legacy-319045/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-335559 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m34.303745721s)
--- PASS: TestNetworkPlugins/group/false/Start (94.30s)
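
--cni=false starts the cluster with no CNI at all; with the Docker runtime the pods then typically land on Docker's own bridge network, and this group re-runs the full connectivity matrix to show basic pod traffic still works in that mode. A quick way to see where pod IPs actually come from (output shape depends on the cluster):

  kubectl --context false-335559 get pods -A -o wide
  # the IP column shows the addresses the pods received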

TestNetworkPlugins/group/enable-default-cni/Start (90.67s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-335559 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E1218 23:44:37.875982    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/addons-277112/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-335559 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m30.667717547s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (90.67s)

TestNetworkPlugins/group/false/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-335559 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.35s)

TestNetworkPlugins/group/false/NetCatPod (10.26s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-335559 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nq45c" [a514ff54-3f34-4224-ae78-f7513d436d70] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1218 23:45:28.911714    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/default-k8s-diff-port-830445/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-nq45c" [a514ff54-3f34-4224-ae78-f7513d436d70] Running
E1218 23:45:35.484345    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/functional-790753/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.003995678s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.26s)

TestNetworkPlugins/group/false/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-335559 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.21s)

TestNetworkPlugins/group/false/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-335559 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.18s)

TestNetworkPlugins/group/false/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-335559 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.17s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-335559 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.37s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-335559 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9tz9l" [23e9a0fd-d3ba-4cb7-ad9c-cd71122128c6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9tz9l" [23e9a0fd-d3ba-4cb7-ad9c-cd71122128c6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.007468826s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.37s)

TestNetworkPlugins/group/flannel/Start (71.34s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-335559 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-335559 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m11.339550162s)
--- PASS: TestNetworkPlugins/group/flannel/Start (71.34s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-335559 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-335559 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-335559 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1218 23:46:00.337494    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/auto-335559/client.crt: no such file or directory
E1218 23:46:00.342638    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/auto-335559/client.crt: no such file or directory
E1218 23:46:00.353332    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/auto-335559/client.crt: no such file or directory
E1218 23:46:00.381726    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/auto-335559/client.crt: no such file or directory
E1218 23:46:00.425395    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/auto-335559/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

TestNetworkPlugins/group/bridge/Start (92.3s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-335559 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E1218 23:46:37.619349    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/kindnet-335559/client.crt: no such file or directory
E1218 23:46:37.624637    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/kindnet-335559/client.crt: no such file or directory
E1218 23:46:37.634874    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/kindnet-335559/client.crt: no such file or directory
E1218 23:46:37.655055    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/kindnet-335559/client.crt: no such file or directory
E1218 23:46:37.695310    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/kindnet-335559/client.crt: no such file or directory
E1218 23:46:37.775647    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/kindnet-335559/client.crt: no such file or directory
E1218 23:46:37.936174    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/kindnet-335559/client.crt: no such file or directory
E1218 23:46:38.256737    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/kindnet-335559/client.crt: no such file or directory
E1218 23:46:38.897466    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/kindnet-335559/client.crt: no such file or directory
E1218 23:46:40.178324    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/kindnet-335559/client.crt: no such file or directory
E1218 23:46:41.314439    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/auto-335559/client.crt: no such file or directory
E1218 23:46:42.738977    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/kindnet-335559/client.crt: no such file or directory
E1218 23:46:47.859113    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/kindnet-335559/client.crt: no such file or directory
E1218 23:46:58.099745    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/kindnet-335559/client.crt: no such file or directory
E1218 23:47:01.579241    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/skaffold-545039/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-335559 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m32.302935232s)
--- PASS: TestNetworkPlugins/group/bridge/Start (92.30s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-f5vm9" [9f293943-e127-4bc1-86a5-ae755009810a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004268631s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.5s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-335559 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.50s)

TestNetworkPlugins/group/flannel/NetCatPod (11.36s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-335559 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-c9dkz" [50000f7c-aea3-4da2-ae5d-d5f80a1c51cc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1218 23:47:18.580314    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/kindnet-335559/client.crt: no such file or directory
E1218 23:47:22.275472    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/auto-335559/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-c9dkz" [50000f7c-aea3-4da2-ae5d-d5f80a1c51cc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.00334944s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.36s)

TestNetworkPlugins/group/flannel/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-335559 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-335559 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-335559 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

TestNetworkPlugins/group/kubenet/Start (90.17s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-335559 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E1218 23:47:57.156283    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/old-k8s-version-809833/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-335559 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m30.173892093s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (90.17s)
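
Note the flag difference in this group: kubenet is the legacy kubelet-built-in bridge plugin, not a CNI, so it is selected with --network-plugin=kubenet instead of --cni=.... Side by side, abridged from the start commands logged in this report:

  minikube start -p flannel-335559 --cni=flannel --driver=docker --container-runtime=docker
  minikube start -p kubenet-335559 --network-plugin=kubenet --driver=docker --container-runtime=docker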

TestNetworkPlugins/group/bridge/KubeletFlags (0.61s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-335559 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.61s)

TestNetworkPlugins/group/bridge/NetCatPod (11.49s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-335559 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-pqhn4" [29d93509-bd0f-47c3-a71c-86e81bcf5326] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1218 23:47:59.540517    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/kindnet-335559/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-pqhn4" [29d93509-bd0f-47c3-a71c-86e81bcf5326] Running
E1218 23:48:06.328653    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/calico-335559/client.crt: no such file or directory
E1218 23:48:06.334698    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/calico-335559/client.crt: no such file or directory
E1218 23:48:06.345768    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/calico-335559/client.crt: no such file or directory
E1218 23:48:06.366785    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/calico-335559/client.crt: no such file or directory
E1218 23:48:06.407179    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/calico-335559/client.crt: no such file or directory
E1218 23:48:06.487727    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/calico-335559/client.crt: no such file or directory
E1218 23:48:06.648046    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/calico-335559/client.crt: no such file or directory
E1218 23:48:06.968598    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/calico-335559/client.crt: no such file or directory
E1218 23:48:07.609419    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/calico-335559/client.crt: no such file or directory
E1218 23:48:08.890381    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/calico-335559/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005084207s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.49s)

TestNetworkPlugins/group/bridge/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-335559 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.23s)

TestNetworkPlugins/group/bridge/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-335559 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

TestNetworkPlugins/group/bridge/HairPin (0.22s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-335559 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.22s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-335559 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.33s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.25s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-335559 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-w28wm" [504a519a-2601-4d37-8dd1-fd5ad9aa2d32] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-w28wm" [504a519a-2601-4d37-8dd1-fd5ad9aa2d32] Running
E1218 23:49:28.252608    7489 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17822-2192/.minikube/profiles/calico-335559/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.003394551s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.25s)

TestNetworkPlugins/group/kubenet/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-335559 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.18s)

TestNetworkPlugins/group/kubenet/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-335559 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.17s)

TestNetworkPlugins/group/kubenet/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-335559 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.17s)

Test skip (26/330)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.12s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
aaa_download_only_test.go:102: No preload image
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.12s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0.59s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-633534 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:237: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-633534" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-633534
--- SKIP: TestDownloadOnlyKic (0.59s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:443: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)
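
This skip is expected on the present job: TestDockerEnvContainerd exercises minikube docker-env against a containerd-backed cluster, and this run pins --container-runtime=docker. The configuration it wants would look like (a sketch):

  minikube start --driver=docker --container-runtime=containerd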

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1786: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.2s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-615369" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-615369
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)

TestNetworkPlugins/group/cilium (4.89s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-335559 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-335559

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-335559

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-335559

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-335559

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-335559

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-335559

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-335559

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-335559

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-335559

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-335559

>>> host: /etc/nsswitch.conf:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> host: /etc/hosts:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> host: /etc/resolv.conf:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-335559

>>> host: crictl pods:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> host: crictl containers:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> k8s: describe netcat deployment:
error: context "cilium-335559" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-335559" does not exist

>>> k8s: netcat logs:
error: context "cilium-335559" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-335559" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-335559" does not exist

>>> k8s: coredns logs:
error: context "cilium-335559" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-335559" does not exist

>>> k8s: api server logs:
error: context "cilium-335559" does not exist

>>> host: /etc/cni:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> host: ip a s:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> host: ip r s:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> host: iptables-save:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> host: iptables table nat:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-335559

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-335559

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-335559" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-335559" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-335559

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-335559

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-335559" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-335559" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-335559" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-335559" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-335559" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> host: kubelet daemon config:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> k8s: kubelet logs:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-335559

>>> host: docker daemon status:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> host: docker daemon config:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> host: docker system info:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> host: cri-docker daemon status:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> host: cri-docker daemon config:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> host: cri-dockerd version:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> host: containerd daemon status:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> host: containerd daemon config:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> host: containerd config dump:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> host: crio daemon status:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> host: crio daemon config:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> host: /etc/crio:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"

>>> host: crio config:
* Profile "cilium-335559" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335559"
----------------------- debugLogs end: cilium-335559 [took: 4.649042936s] --------------------------------
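Every probe above fails with one of the same few messages because the test was skipped before "minikube start -p cilium-335559" ever ran: no minikube profile and no kubectl context with that name exist, and the kubectl config dump is accordingly empty (clusters: null, current-context: ""). Any context-scoped command reproduces it, for example (illustrative invocation):

	kubectl --context cilium-335559 get pods -A
	# error: context "cilium-335559" does not exist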
helpers_test.go:175: Cleaning up "cilium-335559" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-335559
--- SKIP: TestNetworkPlugins/group/cilium (4.89s)
