Test Report: Docker_Linux_docker_arm64 17114

51f3d9893db86a392fa9064ae9bce74bae887273:2023-08-31:30790

Tests failed (3/319)

Order  Failed test                                           Duration (s)
25     TestAddons/parallel/Ingress                           40.62
162    TestIngressAddonLegacy/serial/ValidateIngressAddons   56.73
233    TestRunningBinaryUpgrade                              454.92
TestAddons/parallel/Ingress (40.62s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-435384 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-435384 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-435384 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9c980b87-f450-4922-86d7-b3140fc2433f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9c980b87-f450-4922-86d7-b3140fc2433f] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.013308005s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p addons-435384 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context addons-435384 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p addons-435384 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.069174496s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p addons-435384 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p addons-435384 addons disable ingress-dns --alsologtostderr -v=1: (1.681507239s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p addons-435384 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p addons-435384 addons disable ingress --alsologtostderr -v=1: (7.761100898s)
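
The failure above reduces to one DNS query: addons_test.go resolves hello-john.test against the ingress-dns addon listening on the node IP 192.168.49.2, and a timeout fails the test. A minimal standalone reproduction of that check, sketched in Go; the lookupViaNode helper and its retry policy are illustrative, not minikube's code, and it assumes nslookup is on PATH and the cluster from this run is still up:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// lookupViaNode shells out to nslookup the same way the test does,
	// retrying a few times because the ingress-dns pod can be slow to answer.
	func lookupViaNode(host, nodeIP string, attempts int) error {
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("nslookup", host, nodeIP).CombinedOutput()
			if err == nil {
				fmt.Printf("resolved %s via %s:\n%s", host, nodeIP, out)
				return nil
			}
			lastErr = fmt.Errorf("attempt %d: %v: %s", i+1, err, out)
			time.Sleep(5 * time.Second)
		}
		return lastErr
	}

	func main() {
		// Values taken from this run: the ingress-dns example host and the node IP.
		if err := lookupViaNode("hello-john.test", "192.168.49.2", 3); err != nil {
			fmt.Println("lookup failed:", err)
		}
	}

When the ingress-dns pod never answers, this prints the same ";; connection timed out; no servers could be reached" seen in the captured stdout above.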
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-435384
helpers_test.go:235: (dbg) docker inspect addons-435384:

-- stdout --
	[
	    {
	        "Id": "7f1606dbfa4be86051cf7ec59f52d0f93c3ca025f00fae50bc8dde28940241d8",
	        "Created": "2023-08-30T22:54:38.481040651Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1503265,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-30T22:54:38.803748699Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:879c6efc994c345ac84dd4ebb4fc5b49dd2a4b340e335879382e51233f79b51a",
	        "ResolvConfPath": "/var/lib/docker/containers/7f1606dbfa4be86051cf7ec59f52d0f93c3ca025f00fae50bc8dde28940241d8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7f1606dbfa4be86051cf7ec59f52d0f93c3ca025f00fae50bc8dde28940241d8/hostname",
	        "HostsPath": "/var/lib/docker/containers/7f1606dbfa4be86051cf7ec59f52d0f93c3ca025f00fae50bc8dde28940241d8/hosts",
	        "LogPath": "/var/lib/docker/containers/7f1606dbfa4be86051cf7ec59f52d0f93c3ca025f00fae50bc8dde28940241d8/7f1606dbfa4be86051cf7ec59f52d0f93c3ca025f00fae50bc8dde28940241d8-json.log",
	        "Name": "/addons-435384",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-435384:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-435384",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ce39cd0df81b284aae3bbe1a87e05bd8fe26c5b6084116f91d9b799d82c02f3a-init/diff:/var/lib/docker/overlay2/ef055cb4b9f7ea74c3fdc71828094f56839d9c7e7022b41a5ab3cc1d5d79c8a3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ce39cd0df81b284aae3bbe1a87e05bd8fe26c5b6084116f91d9b799d82c02f3a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ce39cd0df81b284aae3bbe1a87e05bd8fe26c5b6084116f91d9b799d82c02f3a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ce39cd0df81b284aae3bbe1a87e05bd8fe26c5b6084116f91d9b799d82c02f3a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-435384",
	                "Source": "/var/lib/docker/volumes/addons-435384/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-435384",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-435384",
	                "name.minikube.sigs.k8s.io": "addons-435384",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bb312f9749eaf32439389da7c5b2ec03b7155b9a0bf3da8b699240e85c4adc4b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34337"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34336"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34333"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34335"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34334"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/bb312f9749ea",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-435384": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7f1606dbfa4b",
	                        "addons-435384"
	                    ],
	                    "NetworkID": "c6f1eae03ca5f84d77d7fc5c2aa6c8572385d8719e727616cfdfcb99abab9b98",
	                    "EndpointID": "a8665c76ed687e582cd7fe253389498be96ee810a2e4d98b17c9f375fe4de5db",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
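
The post-mortem helpers that follow do not parse this JSON wholesale; they ask the docker CLI to render single fields through Go templates (the "22/tcp" HostPort and per-network IPAddress templates appear verbatim in the Last Start log below). A small sketch of that pattern, assuming the docker CLI is on PATH; inspectField is a hypothetical helper name, not a minikube function:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// inspectField asks the docker CLI to render one field of a container's
	// inspect output through a Go template, e.g. the node IP or a host port.
	func inspectField(container, tmpl string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", fmt.Errorf("docker inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		// Templates copied from the commands logged later in this report.
		ip, err := inspectField("addons-435384",
			`{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}`)
		if err == nil {
			fmt.Println("node IP:", ip) // 192.168.49.2 in this run
		}
		sshPort, err := inspectField("addons-435384",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
		if err == nil {
			fmt.Println("ssh port:", sshPort) // 34337 in this run
		}
	}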
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-435384 -n addons-435384
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-435384 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-435384 logs -n 25: (1.340247031s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-850644   | jenkins | v1.31.2 | 30 Aug 23 22:53 UTC |                     |
	|         | -p download-only-850644        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                        |         |         |                     |                     |
	|         | --container-runtime=docker     |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=docker     |                        |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-850644   | jenkins | v1.31.2 | 30 Aug 23 22:54 UTC |                     |
	|         | -p download-only-850644        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1   |                        |         |         |                     |                     |
	|         | --container-runtime=docker     |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=docker     |                        |         |         |                     |                     |
	| delete  | --all                          | minikube               | jenkins | v1.31.2 | 30 Aug 23 22:54 UTC | 30 Aug 23 22:54 UTC |
	| delete  | -p download-only-850644        | download-only-850644   | jenkins | v1.31.2 | 30 Aug 23 22:54 UTC | 30 Aug 23 22:54 UTC |
	| delete  | -p download-only-850644        | download-only-850644   | jenkins | v1.31.2 | 30 Aug 23 22:54 UTC | 30 Aug 23 22:54 UTC |
	| start   | --download-only -p             | download-docker-024755 | jenkins | v1.31.2 | 30 Aug 23 22:54 UTC |                     |
	|         | download-docker-024755         |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=docker     |                        |         |         |                     |                     |
	| delete  | -p download-docker-024755      | download-docker-024755 | jenkins | v1.31.2 | 30 Aug 23 22:54 UTC | 30 Aug 23 22:54 UTC |
	| start   | --download-only -p             | binary-mirror-710377   | jenkins | v1.31.2 | 30 Aug 23 22:54 UTC |                     |
	|         | binary-mirror-710377           |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --binary-mirror                |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45655         |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=docker     |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-710377        | binary-mirror-710377   | jenkins | v1.31.2 | 30 Aug 23 22:54 UTC | 30 Aug 23 22:54 UTC |
	| start   | -p addons-435384               | addons-435384          | jenkins | v1.31.2 | 30 Aug 23 22:54 UTC | 30 Aug 23 22:56 UTC |
	|         | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --addons=registry              |                        |         |         |                     |                     |
	|         | --addons=metrics-server        |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|         | --addons=gcp-auth              |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=docker     |                        |         |         |                     |                     |
	|         | --addons=ingress               |                        |         |         |                     |                     |
	|         | --addons=ingress-dns           |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-435384          | jenkins | v1.31.2 | 30 Aug 23 22:56 UTC | 30 Aug 23 22:56 UTC |
	|         | addons-435384                  |                        |         |         |                     |                     |
	| addons  | enable headlamp                | addons-435384          | jenkins | v1.31.2 | 30 Aug 23 22:56 UTC | 30 Aug 23 22:56 UTC |
	|         | -p addons-435384               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-435384 ip               | addons-435384          | jenkins | v1.31.2 | 30 Aug 23 22:56 UTC | 30 Aug 23 22:56 UTC |
	| addons  | addons-435384 addons disable   | addons-435384          | jenkins | v1.31.2 | 30 Aug 23 22:56 UTC | 30 Aug 23 22:56 UTC |
	|         | registry --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-435384          | jenkins | v1.31.2 | 30 Aug 23 22:57 UTC | 30 Aug 23 22:57 UTC |
	|         | addons-435384                  |                        |         |         |                     |                     |
	| addons  | addons-435384 addons           | addons-435384          | jenkins | v1.31.2 | 30 Aug 23 22:57 UTC | 30 Aug 23 22:57 UTC |
	|         | disable metrics-server         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ssh     | addons-435384 ssh curl -s      | addons-435384          | jenkins | v1.31.2 | 30 Aug 23 22:57 UTC | 30 Aug 23 22:57 UTC |
	|         | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|         | nginx.example.com'             |                        |         |         |                     |                     |
	| ip      | addons-435384 ip               | addons-435384          | jenkins | v1.31.2 | 30 Aug 23 22:57 UTC | 30 Aug 23 22:57 UTC |
	| addons  | addons-435384 addons disable   | addons-435384          | jenkins | v1.31.2 | 30 Aug 23 22:57 UTC | 30 Aug 23 22:57 UTC |
	|         | ingress-dns --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-435384 addons disable   | addons-435384          | jenkins | v1.31.2 | 30 Aug 23 22:57 UTC | 30 Aug 23 22:57 UTC |
	|         | ingress --alsologtostderr -v=1 |                        |         |         |                     |                     |
	| addons  | addons-435384 addons           | addons-435384          | jenkins | v1.31.2 | 30 Aug 23 22:57 UTC |                     |
	|         | disable csi-hostpath-driver    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/30 22:54:14
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0830 22:54:14.538573 1502803 out.go:296] Setting OutFile to fd 1 ...
	I0830 22:54:14.538724 1502803 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:54:14.538734 1502803 out.go:309] Setting ErrFile to fd 2...
	I0830 22:54:14.538740 1502803 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:54:14.539006 1502803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-1496922/.minikube/bin
	I0830 22:54:14.539426 1502803 out.go:303] Setting JSON to false
	I0830 22:54:14.540337 1502803 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":27390,"bootTime":1693408664,"procs":251,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1043-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0830 22:54:14.540407 1502803 start.go:138] virtualization:  
	I0830 22:54:14.543417 1502803 out.go:177] * [addons-435384] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0830 22:54:14.546108 1502803 out.go:177]   - MINIKUBE_LOCATION=17114
	I0830 22:54:14.548194 1502803 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 22:54:14.546327 1502803 notify.go:220] Checking for updates...
	I0830 22:54:14.551999 1502803 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17114-1496922/kubeconfig
	I0830 22:54:14.553862 1502803 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-1496922/.minikube
	I0830 22:54:14.555633 1502803 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0830 22:54:14.557492 1502803 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 22:54:14.559336 1502803 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 22:54:14.583248 1502803 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0830 22:54:14.583337 1502803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 22:54:14.663211 1502803 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-08-30 22:54:14.653232848 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 22:54:14.663320 1502803 docker.go:294] overlay module found
	I0830 22:54:14.665594 1502803 out.go:177] * Using the docker driver based on user configuration
	I0830 22:54:14.667396 1502803 start.go:298] selected driver: docker
	I0830 22:54:14.667414 1502803 start.go:902] validating driver "docker" against <nil>
	I0830 22:54:14.667428 1502803 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 22:54:14.668021 1502803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 22:54:14.728704 1502803 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-08-30 22:54:14.71979873 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 22:54:14.728863 1502803 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0830 22:54:14.729076 1502803 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0830 22:54:14.730996 1502803 out.go:177] * Using Docker driver with root privileges
	I0830 22:54:14.732766 1502803 cni.go:84] Creating CNI manager for ""
	I0830 22:54:14.732802 1502803 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0830 22:54:14.732820 1502803 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0830 22:54:14.732831 1502803 start_flags.go:319] config:
	{Name:addons-435384 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-435384 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:54:14.734955 1502803 out.go:177] * Starting control plane node addons-435384 in cluster addons-435384
	I0830 22:54:14.736781 1502803 cache.go:122] Beginning downloading kic base image for docker with docker
	I0830 22:54:14.738606 1502803 out.go:177] * Pulling base image ...
	I0830 22:54:14.740434 1502803 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0830 22:54:14.740459 1502803 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec in local docker daemon
	I0830 22:54:14.740490 1502803 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17114-1496922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0830 22:54:14.740499 1502803 cache.go:57] Caching tarball of preloaded images
	I0830 22:54:14.740571 1502803 preload.go:174] Found /home/jenkins/minikube-integration/17114-1496922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0830 22:54:14.740580 1502803 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0830 22:54:14.740974 1502803 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/config.json ...
	I0830 22:54:14.741003 1502803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/config.json: {Name:mkeaa98e97f4b0b09a05cc8f071a80a5e2b942e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:54:14.757340 1502803 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec to local cache
	I0830 22:54:14.757464 1502803 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec in local cache directory
	I0830 22:54:14.757490 1502803 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec in local cache directory, skipping pull
	I0830 22:54:14.757496 1502803 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec exists in cache, skipping pull
	I0830 22:54:14.757507 1502803 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec as a tarball
	I0830 22:54:14.757512 1502803 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec from local cache
	I0830 22:54:30.242061 1502803 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec from cached tarball
	I0830 22:54:30.242100 1502803 cache.go:195] Successfully downloaded all kic artifacts
	I0830 22:54:30.242159 1502803 start.go:365] acquiring machines lock for addons-435384: {Name:mk4e4b46568587b0f53173ac539c458817e54465 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 22:54:30.242273 1502803 start.go:369] acquired machines lock for "addons-435384" in 89.543µs
	I0830 22:54:30.242306 1502803 start.go:93] Provisioning new machine with config: &{Name:addons-435384 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-435384 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0830 22:54:30.242391 1502803 start.go:125] createHost starting for "" (driver="docker")
	I0830 22:54:30.244685 1502803 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0830 22:54:30.244921 1502803 start.go:159] libmachine.API.Create for "addons-435384" (driver="docker")
	I0830 22:54:30.244959 1502803 client.go:168] LocalClient.Create starting
	I0830 22:54:30.245065 1502803 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17114-1496922/.minikube/certs/ca.pem
	I0830 22:54:30.922775 1502803 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17114-1496922/.minikube/certs/cert.pem
	I0830 22:54:32.483573 1502803 cli_runner.go:164] Run: docker network inspect addons-435384 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0830 22:54:32.506914 1502803 cli_runner.go:211] docker network inspect addons-435384 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0830 22:54:32.506990 1502803 network_create.go:281] running [docker network inspect addons-435384] to gather additional debugging logs...
	I0830 22:54:32.507009 1502803 cli_runner.go:164] Run: docker network inspect addons-435384
	W0830 22:54:32.528772 1502803 cli_runner.go:211] docker network inspect addons-435384 returned with exit code 1
	I0830 22:54:32.528813 1502803 network_create.go:284] error running [docker network inspect addons-435384]: docker network inspect addons-435384: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-435384 not found
	I0830 22:54:32.528827 1502803 network_create.go:286] output of [docker network inspect addons-435384]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-435384 not found
	
	** /stderr **
	I0830 22:54:32.528894 1502803 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0830 22:54:32.548350 1502803 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002bf4830}
	I0830 22:54:32.548389 1502803 network_create.go:123] attempt to create docker network addons-435384 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0830 22:54:32.548460 1502803 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-435384 addons-435384
	I0830 22:54:32.636968 1502803 network_create.go:107] docker network addons-435384 192.168.49.0/24 created
	I0830 22:54:32.637018 1502803 kic.go:117] calculated static IP "192.168.49.2" for the "addons-435384" container
	I0830 22:54:32.637103 1502803 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0830 22:54:32.659453 1502803 cli_runner.go:164] Run: docker volume create addons-435384 --label name.minikube.sigs.k8s.io=addons-435384 --label created_by.minikube.sigs.k8s.io=true
	I0830 22:54:32.689514 1502803 oci.go:103] Successfully created a docker volume addons-435384
	I0830 22:54:32.689972 1502803 cli_runner.go:164] Run: docker run --rm --name addons-435384-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-435384 --entrypoint /usr/bin/test -v addons-435384:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec -d /var/lib
	I0830 22:54:34.609465 1502803 cli_runner.go:217] Completed: docker run --rm --name addons-435384-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-435384 --entrypoint /usr/bin/test -v addons-435384:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec -d /var/lib: (1.919432623s)
	I0830 22:54:34.609492 1502803 oci.go:107] Successfully prepared a docker volume addons-435384
	I0830 22:54:34.609514 1502803 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0830 22:54:34.609532 1502803 kic.go:190] Starting extracting preloaded images to volume ...
	I0830 22:54:34.609622 1502803 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17114-1496922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-435384:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec -I lz4 -xf /preloaded.tar -C /extractDir
	I0830 22:54:38.397564 1502803 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17114-1496922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-435384:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec -I lz4 -xf /preloaded.tar -C /extractDir: (3.787901222s)
	I0830 22:54:38.397602 1502803 kic.go:199] duration metric: took 3.788066 seconds to extract preloaded images to volume
	W0830 22:54:38.397743 1502803 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0830 22:54:38.397849 1502803 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0830 22:54:38.465529 1502803 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-435384 --name addons-435384 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-435384 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-435384 --network addons-435384 --ip 192.168.49.2 --volume addons-435384:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec
	I0830 22:54:38.812769 1502803 cli_runner.go:164] Run: docker container inspect addons-435384 --format={{.State.Running}}
	I0830 22:54:38.837754 1502803 cli_runner.go:164] Run: docker container inspect addons-435384 --format={{.State.Status}}
	I0830 22:54:38.863878 1502803 cli_runner.go:164] Run: docker exec addons-435384 stat /var/lib/dpkg/alternatives/iptables
	I0830 22:54:38.927003 1502803 oci.go:144] the created container "addons-435384" has a running status.
	I0830 22:54:38.927034 1502803 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17114-1496922/.minikube/machines/addons-435384/id_rsa...
	I0830 22:54:39.772768 1502803 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17114-1496922/.minikube/machines/addons-435384/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0830 22:54:39.809290 1502803 cli_runner.go:164] Run: docker container inspect addons-435384 --format={{.State.Status}}
	I0830 22:54:39.831494 1502803 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0830 22:54:39.831516 1502803 kic_runner.go:114] Args: [docker exec --privileged addons-435384 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0830 22:54:39.917403 1502803 cli_runner.go:164] Run: docker container inspect addons-435384 --format={{.State.Status}}
	I0830 22:54:39.939346 1502803 machine.go:88] provisioning docker machine ...
	I0830 22:54:39.939377 1502803 ubuntu.go:169] provisioning hostname "addons-435384"
	I0830 22:54:39.939449 1502803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-435384
	I0830 22:54:39.959282 1502803 main.go:141] libmachine: Using SSH client type: native
	I0830 22:54:39.959729 1502803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34337 <nil> <nil>}
	I0830 22:54:39.959746 1502803 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-435384 && echo "addons-435384" | sudo tee /etc/hostname
	I0830 22:54:40.129789 1502803 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-435384
	
	I0830 22:54:40.129928 1502803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-435384
	I0830 22:54:40.147311 1502803 main.go:141] libmachine: Using SSH client type: native
	I0830 22:54:40.147745 1502803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34337 <nil> <nil>}
	I0830 22:54:40.147764 1502803 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-435384' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-435384/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-435384' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 22:54:40.290119 1502803 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 22:54:40.290144 1502803 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17114-1496922/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-1496922/.minikube}
	I0830 22:54:40.290165 1502803 ubuntu.go:177] setting up certificates
	I0830 22:54:40.290174 1502803 provision.go:83] configureAuth start
	I0830 22:54:40.290232 1502803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-435384
	I0830 22:54:40.308049 1502803 provision.go:138] copyHostCerts
	I0830 22:54:40.308116 1502803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-1496922/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-1496922/.minikube/ca.pem (1082 bytes)
	I0830 22:54:40.308226 1502803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-1496922/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-1496922/.minikube/cert.pem (1123 bytes)
	I0830 22:54:40.308339 1502803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-1496922/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-1496922/.minikube/key.pem (1679 bytes)
	I0830 22:54:40.308381 1502803 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-1496922/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-1496922/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-1496922/.minikube/certs/ca-key.pem org=jenkins.addons-435384 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-435384]
	I0830 22:54:41.499764 1502803 provision.go:172] copyRemoteCerts
	I0830 22:54:41.499829 1502803 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 22:54:41.499873 1502803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-435384
	I0830 22:54:41.518600 1502803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34337 SSHKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/addons-435384/id_rsa Username:docker}
	I0830 22:54:41.619151 1502803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-1496922/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0830 22:54:41.645599 1502803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-1496922/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 22:54:41.672069 1502803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-1496922/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0830 22:54:41.698671 1502803 provision.go:86] duration metric: configureAuth took 1.408483963s
	I0830 22:54:41.698694 1502803 ubuntu.go:193] setting minikube options for container-runtime
	I0830 22:54:41.698888 1502803 config.go:182] Loaded profile config "addons-435384": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0830 22:54:41.698939 1502803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-435384
	I0830 22:54:41.718744 1502803 main.go:141] libmachine: Using SSH client type: native
	I0830 22:54:41.719164 1502803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34337 <nil> <nil>}
	I0830 22:54:41.719174 1502803 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0830 22:54:41.866439 1502803 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0830 22:54:41.866496 1502803 ubuntu.go:71] root file system type: overlay
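
The root-filesystem probe above is just df --output=fstype / | tail -n 1 run over SSH inside the kic container, and it reports overlay because the container runs on overlayfs. On Linux the same answer can be read directly from /proc/mounts without shelling out; a minimal sketch under that assumption (not minikube's code):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// rootFSType scans /proc/mounts for the entry whose mount point is "/"
	// and returns its filesystem type (e.g. "overlay" inside a kic container).
	func rootFSType() (string, error) {
		f, err := os.Open("/proc/mounts")
		if err != nil {
			return "", err
		}
		defer f.Close()
		s := bufio.NewScanner(f)
		for s.Scan() {
			// Each line: device mountpoint fstype options dump pass
			fields := strings.Fields(s.Text())
			if len(fields) >= 3 && fields[1] == "/" {
				return fields[2], nil
			}
		}
		return "", fmt.Errorf("no / entry in /proc/mounts: %v", s.Err())
	}

	func main() {
		t, err := rootFSType()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println(t)
	}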
	I0830 22:54:41.866631 1502803 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0830 22:54:41.866704 1502803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-435384
	I0830 22:54:41.889792 1502803 main.go:141] libmachine: Using SSH client type: native
	I0830 22:54:41.890221 1502803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34337 <nil> <nil>}
	I0830 22:54:41.890308 1502803 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0830 22:54:42.047154 1502803 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0830 22:54:42.047259 1502803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-435384
	I0830 22:54:42.066584 1502803 main.go:141] libmachine: Using SSH client type: native
	I0830 22:54:42.067012 1502803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34337 <nil> <nil>}
	I0830 22:54:42.067040 1502803 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0830 22:54:42.859513 1502803 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-07-21 20:33:53.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-08-30 22:54:42.043429372 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0830 22:54:42.859541 1502803 machine.go:91] provisioned docker machine in 2.920174133s
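The restart above fired because the update command is guarded by diff: diff -u exits 0 when the two files match and non-zero when they differ, so the || { ... } block installs the new unit and restarts Docker only when something actually changed (the diff printed above is that non-zero case). The same idiom in isolation, with placeholder paths and service name:

    # Install a new config only if it differs from the current one (generic sketch)
    if ! sudo diff -u /etc/example/app.conf /etc/example/app.conf.new >/dev/null; then
        sudo mv /etc/example/app.conf.new /etc/example/app.conf
        sudo systemctl daemon-reload && sudo systemctl restart example.service
    fi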
	I0830 22:54:42.859551 1502803 client.go:171] LocalClient.Create took 12.614586912s
	I0830 22:54:42.859573 1502803 start.go:167] duration metric: libmachine.API.Create for "addons-435384" took 12.614648902s
	I0830 22:54:42.859582 1502803 start.go:300] post-start starting for "addons-435384" (driver="docker")
	I0830 22:54:42.859591 1502803 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 22:54:42.859669 1502803 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 22:54:42.859719 1502803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-435384
	I0830 22:54:42.877812 1502803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34337 SSHKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/addons-435384/id_rsa Username:docker}
	I0830 22:54:42.979446 1502803 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 22:54:42.983483 1502803 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0830 22:54:42.983521 1502803 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0830 22:54:42.983533 1502803 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0830 22:54:42.983539 1502803 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0830 22:54:42.983548 1502803 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-1496922/.minikube/addons for local assets ...
	I0830 22:54:42.983615 1502803 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-1496922/.minikube/files for local assets ...
	I0830 22:54:42.983645 1502803 start.go:303] post-start completed in 124.056566ms
	I0830 22:54:42.983964 1502803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-435384
	I0830 22:54:43.001251 1502803 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/config.json ...
	I0830 22:54:43.001536 1502803 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0830 22:54:43.001594 1502803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-435384
	I0830 22:54:43.019112 1502803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34337 SSHKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/addons-435384/id_rsa Username:docker}
	I0830 22:54:43.114971 1502803 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0830 22:54:43.120432 1502803 start.go:128] duration metric: createHost completed in 12.87802628s
	I0830 22:54:43.120455 1502803 start.go:83] releasing machines lock for "addons-435384", held for 12.878169985s
	I0830 22:54:43.120528 1502803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-435384
	I0830 22:54:43.137944 1502803 ssh_runner.go:195] Run: cat /version.json
	I0830 22:54:43.137994 1502803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-435384
	I0830 22:54:43.138007 1502803 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 22:54:43.138068 1502803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-435384
	I0830 22:54:43.162458 1502803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34337 SSHKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/addons-435384/id_rsa Username:docker}
	I0830 22:54:43.179445 1502803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34337 SSHKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/addons-435384/id_rsa Username:docker}
	I0830 22:54:43.388058 1502803 ssh_runner.go:195] Run: systemctl --version
	I0830 22:54:43.393633 1502803 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0830 22:54:43.399011 1502803 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0830 22:54:43.427434 1502803 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
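The patch above exists because CNI 1.0.0 configuration validation expects every network config to carry a "name" field, which some shipped loopback configs omit; the find/sed pass injects "name": "loopback" when it is missing and pins cniVersion to 1.0.0. A patched file would look roughly like this (the path and exact contents are assumptions; the log never prints them):

    $ cat /etc/cni/net.d/200-loopback.conf
    {
      "cniVersion": "1.0.0",
      "name": "loopback",
      "type": "loopback"
    }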
	I0830 22:54:43.427560 1502803 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0830 22:54:43.461394 1502803 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0830 22:54:43.461418 1502803 start.go:466] detecting cgroup driver to use...
	I0830 22:54:43.461448 1502803 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0830 22:54:43.461557 1502803 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:54:43.480439 1502803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0830 22:54:43.491948 1502803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0830 22:54:43.503005 1502803 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0830 22:54:43.503082 1502803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0830 22:54:43.514174 1502803 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0830 22:54:43.525377 1502803 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0830 22:54:43.536398 1502803 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0830 22:54:43.547330 1502803 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 22:54:43.557688 1502803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0830 22:54:43.568624 1502803 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 22:54:43.578243 1502803 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0830 22:54:43.587942 1502803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 22:54:43.679948 1502803 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0830 22:54:43.796169 1502803 start.go:466] detecting cgroup driver to use...
	I0830 22:54:43.796210 1502803 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0830 22:54:43.796261 1502803 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0830 22:54:43.810558 1502803 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0830 22:54:43.810621 1502803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0830 22:54:43.824243 1502803 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 22:54:43.842360 1502803 ssh_runner.go:195] Run: which cri-dockerd
	I0830 22:54:43.846860 1502803 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0830 22:54:43.856587 1502803 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0830 22:54:43.880040 1502803 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0830 22:54:44.000227 1502803 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0830 22:54:44.106142 1502803 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0830 22:54:44.106174 1502803 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
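The log does not print the 144-byte daemon.json payload, but for Docker the cgroup driver is normally selected with the native.cgroupdriver exec-opt, so a cgroupfs configuration plausibly looks like this (an illustrative reconstruction, not the actual bytes written):

    # Hypothetical contents -- the real payload is not shown in the log
    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }
    EOF

The daemon-reload and docker restart that follow pick the file up.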
	I0830 22:54:44.128249 1502803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 22:54:44.220454 1502803 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0830 22:54:44.487891 1502803 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0830 22:54:44.596721 1502803 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0830 22:54:44.694086 1502803 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0830 22:54:44.786642 1502803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 22:54:44.891122 1502803 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0830 22:54:44.909228 1502803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 22:54:45.008135 1502803 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0830 22:54:45.093557 1502803 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0830 22:54:45.093722 1502803 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0830 22:54:45.099414 1502803 start.go:534] Will wait 60s for crictl version
	I0830 22:54:45.099574 1502803 ssh_runner.go:195] Run: which crictl
	I0830 22:54:45.104993 1502803 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0830 22:54:45.161566 1502803 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.5
	RuntimeApiVersion:  v1
	I0830 22:54:45.161691 1502803 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0830 22:54:45.194566 1502803 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0830 22:54:45.224495 1502803 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.5 ...
	I0830 22:54:45.224658 1502803 cli_runner.go:164] Run: docker network inspect addons-435384 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0830 22:54:45.243944 1502803 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0830 22:54:45.248801 1502803 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
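The /etc/hosts rewrite above is a replace-or-add: grep -v strips any existing host.minikube.internal line, the echo appends the fresh mapping, and the result goes through a temp file plus sudo cp because a plain > /etc/hosts redirection would be performed by the unprivileged shell rather than by sudo. Generic form, with placeholder name and address:

    # Replace-or-add a hosts entry (name/addr are placeholders)
    name=host.example.internal; addr=192.0.2.1
    { grep -v "$name" /etc/hosts; printf '%s\t%s\n' "$addr" "$name"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts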
	I0830 22:54:45.264537 1502803 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0830 22:54:45.264607 1502803 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0830 22:54:45.287113 1502803 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0830 22:54:45.287137 1502803 docker.go:566] Images already preloaded, skipping extraction
	I0830 22:54:45.287201 1502803 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0830 22:54:45.312411 1502803 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0830 22:54:45.312443 1502803 cache_images.go:84] Images are preloaded, skipping loading
	I0830 22:54:45.312502 1502803 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0830 22:54:45.374096 1502803 cni.go:84] Creating CNI manager for ""
	I0830 22:54:45.374122 1502803 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0830 22:54:45.374154 1502803 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 22:54:45.374172 1502803 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-435384 NodeName:addons-435384 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0830 22:54:45.374320 1502803 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-435384"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
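The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A config like this can be sanity-checked without mutating the host, since kubeadm init accepts a dry-run flag:

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run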
	
	I0830 22:54:45.374390 1502803 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-435384 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:addons-435384 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0830 22:54:45.374454 1502803 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0830 22:54:45.384751 1502803 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 22:54:45.384822 1502803 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 22:54:45.394817 1502803 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0830 22:54:45.415194 1502803 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0830 22:54:45.435540 1502803 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0830 22:54:45.456154 1502803 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0830 22:54:45.460503 1502803 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 22:54:45.473721 1502803 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384 for IP: 192.168.49.2
	I0830 22:54:45.473800 1502803 certs.go:190] acquiring lock for shared ca certs: {Name:mkb3bc561ee04b0a6895c261d3178d0156e44f5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:54:45.474763 1502803 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17114-1496922/.minikube/ca.key
	I0830 22:54:45.836696 1502803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17114-1496922/.minikube/ca.crt ...
	I0830 22:54:45.836725 1502803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-1496922/.minikube/ca.crt: {Name:mk8069502483ee522f174030e184bedb985da1d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:54:45.836909 1502803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17114-1496922/.minikube/ca.key ...
	I0830 22:54:45.836922 1502803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-1496922/.minikube/ca.key: {Name:mk170f58cce330c7a51ca7cc3881dd5b7486388c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:54:45.837006 1502803 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17114-1496922/.minikube/proxy-client-ca.key
	I0830 22:54:46.202761 1502803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17114-1496922/.minikube/proxy-client-ca.crt ...
	I0830 22:54:46.202790 1502803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-1496922/.minikube/proxy-client-ca.crt: {Name:mk373864bebcb6125d66d0123ea4c2394266d178 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:54:46.203568 1502803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17114-1496922/.minikube/proxy-client-ca.key ...
	I0830 22:54:46.203583 1502803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-1496922/.minikube/proxy-client-ca.key: {Name:mk5d07951db80f2d789205c5e3a267d1ae3558f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:54:46.203708 1502803 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.key
	I0830 22:54:46.203724 1502803 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt with IP's: []
	I0830 22:54:46.798216 1502803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt ...
	I0830 22:54:46.798246 1502803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: {Name:mkf57e93bf607837aca22df266b3699dd142838a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:54:46.798430 1502803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.key ...
	I0830 22:54:46.798449 1502803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.key: {Name:mk37285625b6672e22c0ea48cb7c0ef26ddb75cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:54:46.798530 1502803 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/apiserver.key.dd3b5fb2
	I0830 22:54:46.798550 1502803 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0830 22:54:47.403813 1502803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/apiserver.crt.dd3b5fb2 ...
	I0830 22:54:47.403850 1502803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/apiserver.crt.dd3b5fb2: {Name:mk76b74fd2db62769f76c70f6e8df23d82ed38c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:54:47.404572 1502803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/apiserver.key.dd3b5fb2 ...
	I0830 22:54:47.404591 1502803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/apiserver.key.dd3b5fb2: {Name:mkf2d8d5c5a782b4e318e3507c022ac1961582c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:54:47.404682 1502803 certs.go:337] copying /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/apiserver.crt
	I0830 22:54:47.404754 1502803 certs.go:341] copying /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/apiserver.key
	I0830 22:54:47.404803 1502803 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/proxy-client.key
	I0830 22:54:47.404823 1502803 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/proxy-client.crt with IP's: []
	I0830 22:54:47.784916 1502803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/proxy-client.crt ...
	I0830 22:54:47.784946 1502803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/proxy-client.crt: {Name:mk4b6ad919f90bd70a41e11c00c8e1f4155461ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:54:47.785543 1502803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/proxy-client.key ...
	I0830 22:54:47.785558 1502803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/proxy-client.key: {Name:mka7ee183c285e0524f9af751f916bb327e2de94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:54:47.785776 1502803 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-1496922/.minikube/certs/home/jenkins/minikube-integration/17114-1496922/.minikube/certs/ca-key.pem (1675 bytes)
	I0830 22:54:47.785818 1502803 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-1496922/.minikube/certs/home/jenkins/minikube-integration/17114-1496922/.minikube/certs/ca.pem (1082 bytes)
	I0830 22:54:47.785847 1502803 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-1496922/.minikube/certs/home/jenkins/minikube-integration/17114-1496922/.minikube/certs/cert.pem (1123 bytes)
	I0830 22:54:47.785877 1502803 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-1496922/.minikube/certs/home/jenkins/minikube-integration/17114-1496922/.minikube/certs/key.pem (1679 bytes)
	I0830 22:54:47.786516 1502803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 22:54:47.814622 1502803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0830 22:54:47.842350 1502803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 22:54:47.870364 1502803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0830 22:54:47.897638 1502803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-1496922/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 22:54:47.924698 1502803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-1496922/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 22:54:47.951717 1502803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-1496922/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 22:54:47.978993 1502803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-1496922/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0830 22:54:48.007178 1502803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-1496922/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 22:54:48.035452 1502803 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 22:54:48.056023 1502803 ssh_runner.go:195] Run: openssl version
	I0830 22:54:48.063244 1502803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 22:54:48.074724 1502803 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:54:48.079456 1502803 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 22:54 /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:54:48.079520 1502803 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 22:54:48.087936 1502803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
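The b5213941.0 link name above is not arbitrary: OpenSSL resolves CA certificates in /etc/ssl/certs by subject hash, which is exactly what the openssl x509 -hash -noout call two lines up printed, with a .0 suffix to disambiguate hash collisions. The generic recipe:

    # Link a CA into the OpenSSL hash directory under its subject hash
    cert=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$cert" "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$cert").0"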
	I0830 22:54:48.099239 1502803 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 22:54:48.103574 1502803 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0830 22:54:48.103620 1502803 kubeadm.go:404] StartCluster: {Name:addons-435384 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-435384 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:54:48.103739 1502803 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0830 22:54:48.123193 1502803 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 22:54:48.133711 1502803 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 22:54:48.143711 1502803 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0830 22:54:48.143804 1502803 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 22:54:48.154117 1502803 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 22:54:48.154193 1502803 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
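The long --ignore-preflight-errors list is there because the shared-kernel docker driver legitimately trips checks that assume a dedicated host (preexisting directories and manifests, swap, CPU/memory floors, SystemVerification). To see what preflight alone would report for this config, kubeadm can run just that phase:

    sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml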
	I0830 22:54:48.207114 1502803 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0830 22:54:48.207404 1502803 kubeadm.go:322] [preflight] Running pre-flight checks
	I0830 22:54:48.268361 1502803 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0830 22:54:48.268546 1502803 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1043-aws
	I0830 22:54:48.268614 1502803 kubeadm.go:322] OS: Linux
	I0830 22:54:48.268683 1502803 kubeadm.go:322] CGROUPS_CPU: enabled
	I0830 22:54:48.268756 1502803 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0830 22:54:48.268826 1502803 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0830 22:54:48.268904 1502803 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0830 22:54:48.268974 1502803 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0830 22:54:48.269063 1502803 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0830 22:54:48.269144 1502803 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0830 22:54:48.269214 1502803 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0830 22:54:48.269283 1502803 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0830 22:54:48.344919 1502803 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0830 22:54:48.345093 1502803 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0830 22:54:48.345278 1502803 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0830 22:54:48.676665 1502803 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0830 22:54:48.680437 1502803 out.go:204]   - Generating certificates and keys ...
	I0830 22:54:48.680565 1502803 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0830 22:54:48.680630 1502803 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0830 22:54:49.067050 1502803 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0830 22:54:49.293709 1502803 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0830 22:54:49.654929 1502803 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0830 22:54:50.402314 1502803 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0830 22:54:50.830662 1502803 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0830 22:54:50.830979 1502803 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-435384 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0830 22:54:51.213768 1502803 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0830 22:54:51.214141 1502803 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-435384 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0830 22:54:51.437218 1502803 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0830 22:54:51.579859 1502803 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0830 22:54:52.159586 1502803 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0830 22:54:52.159916 1502803 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0830 22:54:52.670213 1502803 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0830 22:54:53.027150 1502803 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0830 22:54:53.995797 1502803 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0830 22:54:54.751116 1502803 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0830 22:54:54.751702 1502803 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0830 22:54:54.756134 1502803 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0830 22:54:54.759067 1502803 out.go:204]   - Booting up control plane ...
	I0830 22:54:54.759198 1502803 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0830 22:54:54.759274 1502803 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0830 22:54:54.759594 1502803 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0830 22:54:54.775446 1502803 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 22:54:54.775546 1502803 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 22:54:54.775588 1502803 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0830 22:54:54.886995 1502803 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0830 22:55:02.389580 1502803 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502627 seconds
	I0830 22:55:02.389705 1502803 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0830 22:55:02.409183 1502803 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0830 22:55:02.935752 1502803 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0830 22:55:02.935934 1502803 kubeadm.go:322] [mark-control-plane] Marking the node addons-435384 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0830 22:55:03.447503 1502803 kubeadm.go:322] [bootstrap-token] Using token: kejal4.cmnvxhlc2wmqz9s0
	I0830 22:55:03.449789 1502803 out.go:204]   - Configuring RBAC rules ...
	I0830 22:55:03.449901 1502803 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0830 22:55:03.456412 1502803 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0830 22:55:03.465529 1502803 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0830 22:55:03.469813 1502803 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0830 22:55:03.474272 1502803 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0830 22:55:03.478201 1502803 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0830 22:55:03.491703 1502803 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0830 22:55:03.721548 1502803 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0830 22:55:03.862943 1502803 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0830 22:55:03.864721 1502803 kubeadm.go:322] 
	I0830 22:55:03.864787 1502803 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0830 22:55:03.864793 1502803 kubeadm.go:322] 
	I0830 22:55:03.864865 1502803 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0830 22:55:03.864869 1502803 kubeadm.go:322] 
	I0830 22:55:03.864894 1502803 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0830 22:55:03.865740 1502803 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0830 22:55:03.865795 1502803 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0830 22:55:03.865800 1502803 kubeadm.go:322] 
	I0830 22:55:03.865851 1502803 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0830 22:55:03.865855 1502803 kubeadm.go:322] 
	I0830 22:55:03.865900 1502803 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0830 22:55:03.865904 1502803 kubeadm.go:322] 
	I0830 22:55:03.865953 1502803 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0830 22:55:03.866024 1502803 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0830 22:55:03.866087 1502803 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0830 22:55:03.866092 1502803 kubeadm.go:322] 
	I0830 22:55:03.866634 1502803 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0830 22:55:03.866713 1502803 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0830 22:55:03.866717 1502803 kubeadm.go:322] 
	I0830 22:55:03.867302 1502803 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token kejal4.cmnvxhlc2wmqz9s0 \
	I0830 22:55:03.867404 1502803 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:631643f16b21814ec8cd841eb99cc8a19ba92b2dc9b8745ca1e490484be9b150 \
	I0830 22:55:03.867671 1502803 kubeadm.go:322] 	--control-plane 
	I0830 22:55:03.867680 1502803 kubeadm.go:322] 
	I0830 22:55:03.867967 1502803 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0830 22:55:03.868015 1502803 kubeadm.go:322] 
	I0830 22:55:03.868282 1502803 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token kejal4.cmnvxhlc2wmqz9s0 \
	I0830 22:55:03.868599 1502803 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:631643f16b21814ec8cd841eb99cc8a19ba92b2dc9b8745ca1e490484be9b150 
	I0830 22:55:03.872115 1502803 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1043-aws\n", err: exit status 1
	I0830 22:55:03.872223 1502803 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
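The --discovery-token-ca-cert-hash value printed in the join commands above pins the cluster CA for joining nodes. Per the kubeadm documentation it can be recomputed from the CA certificate (assuming an RSA key, which matches the certs generated earlier); with this cluster's certificatesDir of /var/lib/minikube/certs that would be:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'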
	I0830 22:55:03.872237 1502803 cni.go:84] Creating CNI manager for ""
	I0830 22:55:03.872253 1502803 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0830 22:55:03.874837 1502803 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0830 22:55:03.876747 1502803 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0830 22:55:03.888531 1502803 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0830 22:55:03.921798 1502803 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0830 22:55:03.921916 1502803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:55:03.921988 1502803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5 minikube.k8s.io/name=addons-435384 minikube.k8s.io/updated_at=2023_08_30T22_55_03_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:55:04.236449 1502803 ops.go:34] apiserver oom_adj: -16
	I0830 22:55:04.236598 1502803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:55:04.337831 1502803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:55:04.936221 1502803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:55:05.436056 1502803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:55:05.935586 1502803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:55:06.435719 1502803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:55:06.935648 1502803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:55:07.435415 1502803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:55:07.936029 1502803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:55:08.436045 1502803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:55:08.936076 1502803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:55:09.436216 1502803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:55:09.935403 1502803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:55:10.435377 1502803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:55:10.935867 1502803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:55:11.435426 1502803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:55:11.936327 1502803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:55:12.435511 1502803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:55:12.935557 1502803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:55:13.436143 1502803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:55:13.936000 1502803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:55:14.436297 1502803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:55:14.935803 1502803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:55:15.436302 1502803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:55:15.936007 1502803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:55:16.435655 1502803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 22:55:16.569976 1502803 kubeadm.go:1081] duration metric: took 12.648101144s to wait for elevateKubeSystemPrivileges.
	I0830 22:55:16.570005 1502803 kubeadm.go:406] StartCluster complete in 28.466390948s
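The burst of identical "kubectl get sa default" calls above is a fixed-interval poll: the harness re-runs the same command roughly every 500ms until the default service account exists, which is the elevateKubeSystemPrivileges wait reported just above as 12.648s. A minimal shell sketch of that pattern, reusing the in-guest paths from the log (not minikube's actual Go implementation):

	# poll every 0.5s, give up after ~30s
	for i in $(seq 1 60); do
	  sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1 && break
	  sleep 0.5
	done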
	I0830 22:55:16.570020 1502803 settings.go:142] acquiring lock: {Name:mk4f2036520f4cce49c9f101737e8fce8f8975fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:55:16.570138 1502803 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17114-1496922/kubeconfig
	I0830 22:55:16.570583 1502803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-1496922/kubeconfig: {Name:mkf4ec4235f416d6c5c702dfdbfaa4d81e4df4f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:55:16.572670 1502803 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0830 22:55:16.572927 1502803 config.go:182] Loaded profile config "addons-435384": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0830 22:55:16.572961 1502803 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
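The toEnable map above is the addon set the harness drives programmatically; the same toggles are exposed by the addons subcommand used elsewhere in this report. For example (profile name taken from the log):

	out/minikube-linux-arm64 -p addons-435384 addons enable ingress
	out/minikube-linux-arm64 -p addons-435384 addons enable ingress-dns
	out/minikube-linux-arm64 -p addons-435384 addons enable metrics-server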
	I0830 22:55:16.573038 1502803 addons.go:69] Setting volumesnapshots=true in profile "addons-435384"
	I0830 22:55:16.573054 1502803 addons.go:231] Setting addon volumesnapshots=true in "addons-435384"
	I0830 22:55:16.573109 1502803 host.go:66] Checking if "addons-435384" exists ...
	I0830 22:55:16.573322 1502803 addons.go:69] Setting cloud-spanner=true in profile "addons-435384"
	I0830 22:55:16.573341 1502803 addons.go:231] Setting addon cloud-spanner=true in "addons-435384"
	I0830 22:55:16.573379 1502803 host.go:66] Checking if "addons-435384" exists ...
	I0830 22:55:16.573549 1502803 addons.go:69] Setting ingress-dns=true in profile "addons-435384"
	I0830 22:55:16.573561 1502803 addons.go:231] Setting addon ingress-dns=true in "addons-435384"
	I0830 22:55:16.573602 1502803 host.go:66] Checking if "addons-435384" exists ...
	I0830 22:55:16.573849 1502803 cli_runner.go:164] Run: docker container inspect addons-435384 --format={{.State.Status}}
	I0830 22:55:16.573997 1502803 cli_runner.go:164] Run: docker container inspect addons-435384 --format={{.State.Status}}
	I0830 22:55:16.574068 1502803 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-435384"
	I0830 22:55:16.574096 1502803 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-435384"
	I0830 22:55:16.574125 1502803 host.go:66] Checking if "addons-435384" exists ...
	I0830 22:55:16.574575 1502803 cli_runner.go:164] Run: docker container inspect addons-435384 --format={{.State.Status}}
	I0830 22:55:16.574664 1502803 addons.go:69] Setting default-storageclass=true in profile "addons-435384"
	I0830 22:55:16.574677 1502803 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-435384"
	I0830 22:55:16.574897 1502803 cli_runner.go:164] Run: docker container inspect addons-435384 --format={{.State.Status}}
	I0830 22:55:16.574963 1502803 addons.go:69] Setting gcp-auth=true in profile "addons-435384"
	I0830 22:55:16.574979 1502803 mustload.go:65] Loading cluster: addons-435384
	I0830 22:55:16.575114 1502803 config.go:182] Loaded profile config "addons-435384": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0830 22:55:16.575337 1502803 cli_runner.go:164] Run: docker container inspect addons-435384 --format={{.State.Status}}
	I0830 22:55:16.575409 1502803 addons.go:69] Setting ingress=true in profile "addons-435384"
	I0830 22:55:16.575420 1502803 addons.go:231] Setting addon ingress=true in "addons-435384"
	I0830 22:55:16.575453 1502803 host.go:66] Checking if "addons-435384" exists ...
	I0830 22:55:16.575805 1502803 cli_runner.go:164] Run: docker container inspect addons-435384 --format={{.State.Status}}
	I0830 22:55:16.576043 1502803 addons.go:69] Setting registry=true in profile "addons-435384"
	I0830 22:55:16.576059 1502803 addons.go:231] Setting addon registry=true in "addons-435384"
	I0830 22:55:16.576141 1502803 host.go:66] Checking if "addons-435384" exists ...
	I0830 22:55:16.576591 1502803 cli_runner.go:164] Run: docker container inspect addons-435384 --format={{.State.Status}}
	I0830 22:55:16.576697 1502803 addons.go:69] Setting inspektor-gadget=true in profile "addons-435384"
	I0830 22:55:16.576714 1502803 addons.go:231] Setting addon inspektor-gadget=true in "addons-435384"
	I0830 22:55:16.576739 1502803 host.go:66] Checking if "addons-435384" exists ...
	I0830 22:55:16.577103 1502803 cli_runner.go:164] Run: docker container inspect addons-435384 --format={{.State.Status}}
	I0830 22:55:16.577203 1502803 addons.go:69] Setting metrics-server=true in profile "addons-435384"
	I0830 22:55:16.577219 1502803 addons.go:231] Setting addon metrics-server=true in "addons-435384"
	I0830 22:55:16.577248 1502803 host.go:66] Checking if "addons-435384" exists ...
	I0830 22:55:16.577638 1502803 cli_runner.go:164] Run: docker container inspect addons-435384 --format={{.State.Status}}
	I0830 22:55:16.577952 1502803 addons.go:69] Setting storage-provisioner=true in profile "addons-435384"
	I0830 22:55:16.577972 1502803 addons.go:231] Setting addon storage-provisioner=true in "addons-435384"
	I0830 22:55:16.578003 1502803 host.go:66] Checking if "addons-435384" exists ...
	I0830 22:55:16.618934 1502803 cli_runner.go:164] Run: docker container inspect addons-435384 --format={{.State.Status}}
	I0830 22:55:16.629663 1502803 cli_runner.go:164] Run: docker container inspect addons-435384 --format={{.State.Status}}
	I0830 22:55:16.706136 1502803 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.9
	I0830 22:55:16.708825 1502803 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0830 22:55:16.708881 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0830 22:55:16.708986 1502803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-435384
	I0830 22:55:16.709287 1502803 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-435384" context rescaled to 1 replicas
	I0830 22:55:16.709349 1502803 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0830 22:55:16.712702 1502803 out.go:177] * Verifying Kubernetes components...
	I0830 22:55:16.714922 1502803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:55:16.724902 1502803 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0830 22:55:16.728602 1502803 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0830 22:55:16.728665 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0830 22:55:16.728763 1502803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-435384
	I0830 22:55:16.794005 1502803 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0830 22:55:16.799188 1502803 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0830 22:55:16.801728 1502803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0830 22:55:16.805367 1502803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0830 22:55:16.808244 1502803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0830 22:55:16.810669 1502803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0830 22:55:16.814143 1502803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0830 22:55:16.812391 1502803 addons.go:231] Setting addon default-storageclass=true in "addons-435384"
	I0830 22:55:16.814658 1502803 host.go:66] Checking if "addons-435384" exists ...
	I0830 22:55:16.825420 1502803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0830 22:55:16.822052 1502803 host.go:66] Checking if "addons-435384" exists ...
	I0830 22:55:16.822498 1502803 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0830 22:55:16.831885 1502803 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0830 22:55:16.831900 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0830 22:55:16.831952 1502803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-435384
	I0830 22:55:16.829747 1502803 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0830 22:55:16.833979 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0830 22:55:16.834086 1502803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-435384
	I0830 22:55:16.830257 1502803 cli_runner.go:164] Run: docker container inspect addons-435384 --format={{.State.Status}}
	I0830 22:55:16.838062 1502803 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 22:55:16.840119 1502803 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:55:16.840136 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0830 22:55:16.840240 1502803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-435384
	I0830 22:55:16.843803 1502803 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0830 22:55:16.846080 1502803 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0830 22:55:16.848267 1502803 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0830 22:55:16.850581 1502803 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0830 22:55:16.850599 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0830 22:55:16.850654 1502803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-435384
	I0830 22:55:16.923708 1502803 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0830 22:55:16.933481 1502803 out.go:177]   - Using image docker.io/registry:2.8.1
	I0830 22:55:16.936272 1502803 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0830 22:55:16.936349 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0830 22:55:16.936564 1502803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-435384
	I0830 22:55:16.947152 1502803 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.19.0
	I0830 22:55:16.950424 1502803 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0830 22:55:16.954185 1502803 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0830 22:55:16.954209 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0830 22:55:16.954269 1502803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-435384
	I0830 22:55:16.951478 1502803 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0830 22:55:16.954367 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0830 22:55:16.954392 1502803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-435384
	I0830 22:55:16.977655 1502803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34337 SSHKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/addons-435384/id_rsa Username:docker}
	I0830 22:55:17.001115 1502803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34337 SSHKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/addons-435384/id_rsa Username:docker}
	I0830 22:55:17.035187 1502803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34337 SSHKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/addons-435384/id_rsa Username:docker}
	I0830 22:55:17.059526 1502803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34337 SSHKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/addons-435384/id_rsa Username:docker}
	I0830 22:55:17.080430 1502803 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0830 22:55:17.080450 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0830 22:55:17.080509 1502803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-435384
	I0830 22:55:17.094452 1502803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34337 SSHKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/addons-435384/id_rsa Username:docker}
	I0830 22:55:17.098589 1502803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34337 SSHKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/addons-435384/id_rsa Username:docker}
	I0830 22:55:17.123933 1502803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34337 SSHKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/addons-435384/id_rsa Username:docker}
	I0830 22:55:17.154954 1502803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34337 SSHKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/addons-435384/id_rsa Username:docker}
	I0830 22:55:17.155812 1502803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34337 SSHKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/addons-435384/id_rsa Username:docker}
	I0830 22:55:17.173986 1502803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34337 SSHKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/addons-435384/id_rsa Username:docker}
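Each of the "docker container inspect -f ..." calls above resolves the host port that Docker published for the guest's SSH port (22/tcp); the sshutil lines then dial 127.0.0.1 on that port (34337 here). The same lookup can be reproduced by hand with the template from the log:

	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  addons-435384
	# prints the published host port, e.g. 34337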
	I0830 22:55:17.204381 1502803 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0830 22:55:17.205215 1502803 node_ready.go:35] waiting up to 6m0s for node "addons-435384" to be "Ready" ...
	I0830 22:55:17.208883 1502803 node_ready.go:49] node "addons-435384" has status "Ready":"True"
	I0830 22:55:17.208943 1502803 node_ready.go:38] duration metric: took 3.706735ms waiting for node "addons-435384" to be "Ready" ...
	I0830 22:55:17.208966 1502803 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 22:55:17.218492 1502803 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2d7km" in "kube-system" namespace to be "Ready" ...
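pod_ready.go polls the Ready condition of each system-critical pod. A rough standalone equivalent for the pod named above, sketched with the in-guest kubectl rather than the harness:

	sudo /var/lib/minikube/binaries/v1.28.1/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
	  wait --for=condition=Ready pod/coredns-5dd5756b68-2d7km --timeout=6m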
	I0830 22:55:17.925803 1502803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0830 22:55:17.938733 1502803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0830 22:55:17.959930 1502803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 22:55:18.015513 1502803 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0830 22:55:18.015534 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0830 22:55:18.019648 1502803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0830 22:55:18.128967 1502803 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0830 22:55:18.128989 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0830 22:55:18.226861 1502803 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0830 22:55:18.226883 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0830 22:55:18.274328 1502803 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0830 22:55:18.274351 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0830 22:55:18.280152 1502803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0830 22:55:18.324862 1502803 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0830 22:55:18.324886 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0830 22:55:18.445807 1502803 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0830 22:55:18.445827 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0830 22:55:18.496338 1502803 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0830 22:55:18.496362 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0830 22:55:18.508993 1502803 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0830 22:55:18.509016 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0830 22:55:18.559188 1502803 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0830 22:55:18.559211 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0830 22:55:18.655933 1502803 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0830 22:55:18.656000 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0830 22:55:18.667855 1502803 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0830 22:55:18.667916 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0830 22:55:18.718114 1502803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0830 22:55:18.753116 1502803 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0830 22:55:18.753212 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0830 22:55:18.756308 1502803 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0830 22:55:18.756332 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0830 22:55:18.845802 1502803 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:55:18.845869 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0830 22:55:18.875100 1502803 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0830 22:55:18.875124 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0830 22:55:19.049725 1502803 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0830 22:55:19.049785 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0830 22:55:19.197284 1502803 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0830 22:55:19.197352 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0830 22:55:19.241703 1502803 pod_ready.go:102] pod "coredns-5dd5756b68-2d7km" in "kube-system" namespace has status "Ready":"False"
	I0830 22:55:19.252007 1502803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0830 22:55:19.267076 1502803 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0830 22:55:19.267101 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0830 22:55:19.276965 1502803 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0830 22:55:19.276990 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0830 22:55:19.436553 1502803 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0830 22:55:19.436577 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0830 22:55:19.440347 1502803 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0830 22:55:19.440369 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0830 22:55:19.480577 1502803 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0830 22:55:19.480610 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0830 22:55:19.584474 1502803 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.38006109s)
	I0830 22:55:19.584513 1502803 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
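The two-stage pipeline that just completed rewrites the CoreDNS Corefile in place: it splices in a hosts block mapping host.minikube.internal to the gateway address 192.168.49.1 (with fallthrough so other names still resolve) and then kubectl-replaces the ConfigMap. A sketch of how the edit could be verified against the same kubeconfig:

	sudo /var/lib/minikube/binaries/v1.28.1/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
	  get configmap coredns -o yaml | grep -A3 'hosts {'
	# expected to show:
	#   hosts {
	#      192.168.49.1 host.minikube.internal
	#      fallthrough
	#   }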
	I0830 22:55:19.655606 1502803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0830 22:55:19.667886 1502803 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0830 22:55:19.667910 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0830 22:55:19.677566 1502803 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0830 22:55:19.677601 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0830 22:55:19.810916 1502803 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0830 22:55:19.810983 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0830 22:55:19.815333 1502803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0830 22:55:19.982949 1502803 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0830 22:55:19.983008 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0830 22:55:20.076284 1502803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.15044515s)
	I0830 22:55:20.217899 1502803 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0830 22:55:20.217959 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0830 22:55:20.276186 1502803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.337415653s)
	I0830 22:55:20.334905 1502803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0830 22:55:21.440190 1502803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.480223247s)
	I0830 22:55:21.741620 1502803 pod_ready.go:102] pod "coredns-5dd5756b68-2d7km" in "kube-system" namespace has status "Ready":"False"
	I0830 22:55:23.432088 1502803 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0830 22:55:23.432174 1502803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-435384
	I0830 22:55:23.465562 1502803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34337 SSHKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/addons-435384/id_rsa Username:docker}
	I0830 22:55:23.922397 1502803 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0830 22:55:24.060658 1502803 addons.go:231] Setting addon gcp-auth=true in "addons-435384"
	I0830 22:55:24.060717 1502803 host.go:66] Checking if "addons-435384" exists ...
	I0830 22:55:24.061264 1502803 cli_runner.go:164] Run: docker container inspect addons-435384 --format={{.State.Status}}
	I0830 22:55:24.097265 1502803 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0830 22:55:24.097337 1502803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-435384
	I0830 22:55:24.127600 1502803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34337 SSHKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/addons-435384/id_rsa Username:docker}
	I0830 22:55:24.248107 1502803 pod_ready.go:102] pod "coredns-5dd5756b68-2d7km" in "kube-system" namespace has status "Ready":"False"
	I0830 22:55:24.527169 1502803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.507484643s)
	I0830 22:55:24.527203 1502803 addons.go:467] Verifying addon ingress=true in "addons-435384"
	I0830 22:55:24.529598 1502803 out.go:177] * Verifying ingress addon...
	I0830 22:55:24.527505 1502803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.247326906s)
	I0830 22:55:24.527558 1502803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.809365794s)
	I0830 22:55:24.527625 1502803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.275585242s)
	I0830 22:55:24.527718 1502803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.872076195s)
	I0830 22:55:24.527789 1502803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.712376248s)
	I0830 22:55:24.531710 1502803 addons.go:467] Verifying addon registry=true in "addons-435384"
	I0830 22:55:24.534395 1502803 out.go:177] * Verifying registry addon...
	I0830 22:55:24.532732 1502803 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0830 22:55:24.532751 1502803 addons.go:467] Verifying addon metrics-server=true in "addons-435384"
	W0830 22:55:24.532804 1502803 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0830 22:55:24.536709 1502803 retry.go:31] will retry after 355.097753ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
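The failure being retried here is an ordering race rather than a broken manifest: csi-hostpath-snapshotclass.yaml creates a VolumeSnapshotClass in the same apply batch that registers its CRD, and the API server has not established the new kind yet, hence "ensure CRDs are installed first". The forced re-apply below succeeds once the CRDs settle; applying in two waves avoids the race entirely. A sketch of that ordering, not what the harness actually runs:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml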
	I0830 22:55:24.537642 1502803 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0830 22:55:24.543111 1502803 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0830 22:55:24.543134 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:24.544121 1502803 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0830 22:55:24.544140 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:24.548999 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:24.550002 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:24.892401 1502803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0830 22:55:25.055671 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:25.056928 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:25.565912 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:25.566521 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:26.057404 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:26.064730 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:26.336140 1502803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.001125962s)
	I0830 22:55:26.336212 1502803 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-435384"
	I0830 22:55:26.339174 1502803 out.go:177] * Verifying csi-hostpath-driver addon...
	I0830 22:55:26.336393 1502803 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.239106594s)
	I0830 22:55:26.342704 1502803 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
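Unlike pod_ready.go, kapi.go waits on a label selector rather than a pod name; a hypothetical one-liner equivalent for the selector above:

	kubectl -n kube-system wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=10m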
	I0830 22:55:26.348875 1502803 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0830 22:55:26.350396 1502803 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0830 22:55:26.348805 1502803 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0830 22:55:26.353281 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:26.353535 1502803 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0830 22:55:26.353568 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0830 22:55:26.359616 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:26.456775 1502803 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0830 22:55:26.456795 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0830 22:55:26.535402 1502803 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0830 22:55:26.535465 1502803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0830 22:55:26.557512 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:26.558063 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:26.615013 1502803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0830 22:55:26.741904 1502803 pod_ready.go:102] pod "coredns-5dd5756b68-2d7km" in "kube-system" namespace has status "Ready":"False"
	I0830 22:55:26.865547 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:27.056818 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:27.058244 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:27.282155 1502803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.389707809s)
	I0830 22:55:27.366057 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:27.555606 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:27.556728 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:27.868789 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:27.984014 1502803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.368926025s)
	I0830 22:55:27.985488 1502803 addons.go:467] Verifying addon gcp-auth=true in "addons-435384"
	I0830 22:55:27.988111 1502803 out.go:177] * Verifying gcp-auth addon...
	I0830 22:55:27.990861 1502803 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0830 22:55:27.995230 1502803 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0830 22:55:27.995250 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:27.999891 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:28.055197 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:28.056216 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:28.365266 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:28.503930 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:28.559410 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:28.559787 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:28.741243 1502803 pod_ready.go:97] pod "coredns-5dd5756b68-2d7km" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-30 22:55:16 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-30 22:55:16 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-30 22:55:16 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-30 22:55:16 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-08-30 22:55:16 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-08-30 22:55:18 +0000 UTC,FinishedAt:2023-08-30 22:55:28 +0000 UTC,ContainerID:docker://1e26cab5d540ec532933414658697a9905673ba309b6605eaa1ef681d65c6bec,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://1e26cab5d540ec532933414658697a9905673ba309b6605eaa1ef681d65c6bec Started:0x4003ae04a0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0830 22:55:28.741277 1502803 pod_ready.go:81] duration metric: took 11.522717107s waiting for pod "coredns-5dd5756b68-2d7km" in "kube-system" namespace to be "Ready" ...
	E0830 22:55:28.741288 1502803 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-2d7km" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-30 22:55:16 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-30 22:55:16 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-30 22:55:16 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-30 22:55:16 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[] PodIP: PodIPs:[] StartTime:2023-08-30 22:55:16 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-08-30 22:55:18 +0000 UTC,FinishedAt:2023-08-30 22:55:28 +0000 UTC,ContainerID:docker://1e26cab5d540ec532933414658697a9905673ba309b6605eaa1ef681d65c6bec,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:docker://1e26cab5d540ec532933414658697a9905673ba309b6605eaa1ef681d65c6bec Started:0x4003ae04a0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0830 22:55:28.741296 1502803 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-znm2g" in "kube-system" namespace to be "Ready" ...
	I0830 22:55:28.865424 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:29.003889 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:29.056710 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:29.058199 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:29.366849 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:29.503940 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:29.555711 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:29.557290 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:29.865925 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:30.003694 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:30.056127 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:30.056761 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:30.368398 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:30.504107 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:30.553406 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:30.555209 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:30.761621 1502803 pod_ready.go:102] pod "coredns-5dd5756b68-znm2g" in "kube-system" namespace has status "Ready":"False"
	I0830 22:55:30.866004 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:31.004218 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:31.055844 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:31.058030 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:31.365471 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:31.503820 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:31.556105 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:31.556776 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:31.864915 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:32.003328 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:32.055817 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:32.056300 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:32.366001 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:32.504942 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:32.553338 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:32.554382 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:32.865960 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:33.003141 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:33.054485 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:33.055392 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:33.259698 1502803 pod_ready.go:102] pod "coredns-5dd5756b68-znm2g" in "kube-system" namespace has status "Ready":"False"
	I0830 22:55:33.364895 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:33.503410 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:33.554129 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:33.555484 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:33.865320 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:34.004166 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:34.062791 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:34.064046 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:34.365538 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:34.504152 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:34.556812 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:34.557540 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:34.865869 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:35.004678 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:35.055803 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:35.056625 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:35.260260 1502803 pod_ready.go:102] pod "coredns-5dd5756b68-znm2g" in "kube-system" namespace has status "Ready":"False"
	I0830 22:55:35.366230 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:35.503763 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:35.555699 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:35.556765 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:35.866258 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:36.004493 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:36.055609 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:36.057546 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:36.366015 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:36.503222 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:36.554758 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:36.554996 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:36.867742 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:37.003942 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:37.053776 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:37.054326 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:37.366956 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:37.505251 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:37.558551 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:37.560128 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:37.760705 1502803 pod_ready.go:102] pod "coredns-5dd5756b68-znm2g" in "kube-system" namespace has status "Ready":"False"
	I0830 22:55:37.866546 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:38.004613 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:38.055511 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:38.057219 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:38.366495 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:38.504283 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:38.555300 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:38.556969 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:38.866384 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:39.004331 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:39.056069 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:39.056544 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:39.366214 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:39.504067 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:39.553758 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:39.555136 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:39.866070 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:40.004389 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:40.056781 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:40.057587 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:40.259986 1502803 pod_ready.go:102] pod "coredns-5dd5756b68-znm2g" in "kube-system" namespace has status "Ready":"False"
	I0830 22:55:40.367603 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:40.504731 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:40.554717 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:40.555548 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:40.865174 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:41.008414 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:41.066529 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:41.067137 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:41.364946 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:41.503448 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:41.554634 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:41.555280 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0830 22:55:41.865367 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:42.004058 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:42.054389 1502803 kapi.go:107] duration metric: took 17.516743031s to wait for kubernetes.io/minikube-addons=registry ...
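The kapi.go:96/107 pairs above are minikube's generic label-selector wait: list the pods matching a label, log their phase while any is still Pending, and emit a duration metric once all are Running. A minimal client-go sketch of that loop follows; it assumes an already-constructed clientset, and the function name and 500ms poll interval are illustrative, not minikube's actual kapi.go code.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForLabel polls pods matching selector in ns until all are Running,
    // returning the elapsed time -- a sketch of the kapi.go pattern logged above.
    func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) (time.Duration, error) {
    	start := time.Now()
    	for {
    		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    		if err != nil {
    			return 0, err
    		}
    		allRunning := len(pods.Items) > 0
    		for _, p := range pods.Items {
    			if p.Status.Phase != corev1.PodRunning {
    				allRunning = false
    				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
    			}
    		}
    		if allRunning {
    			return time.Since(start), nil
    		}
    		select {
    		case <-ctx.Done():
    			return 0, ctx.Err()
    		case <-time.After(500 * time.Millisecond): // poll interval is illustrative
    		}
    	}
    }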
	I0830 22:55:42.055226 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:42.263412 1502803 pod_ready.go:102] pod "coredns-5dd5756b68-znm2g" in "kube-system" namespace has status "Ready":"False"
	I0830 22:55:42.368844 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:42.504232 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:42.554673 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:42.865052 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:43.003670 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:43.054907 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:43.370229 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:43.503472 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:43.554665 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:43.865429 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:44.003641 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:44.055440 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:44.366006 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:44.504007 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:44.555258 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:44.761035 1502803 pod_ready.go:102] pod "coredns-5dd5756b68-znm2g" in "kube-system" namespace has status "Ready":"False"
	I0830 22:55:44.866246 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:45.015130 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:45.055596 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:45.371284 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:45.505926 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:45.555614 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:45.873110 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:46.005039 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:46.056227 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:46.376572 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:46.504404 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:46.555533 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:46.770092 1502803 pod_ready.go:102] pod "coredns-5dd5756b68-znm2g" in "kube-system" namespace has status "Ready":"False"
	I0830 22:55:46.867921 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:47.003881 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:47.055769 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:47.365958 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:47.504032 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:47.554383 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:47.865563 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:48.004782 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:48.055322 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:48.366124 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:48.504010 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:48.554175 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:48.866298 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:49.004599 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:49.055569 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:49.260869 1502803 pod_ready.go:102] pod "coredns-5dd5756b68-znm2g" in "kube-system" namespace has status "Ready":"False"
	I0830 22:55:49.366654 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:49.504394 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:49.555882 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:49.866331 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:50.003694 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:50.056350 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:50.366407 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:50.504207 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:50.555067 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:50.868363 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:51.004150 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:51.055536 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:51.365680 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:51.503985 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:51.554482 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:51.760295 1502803 pod_ready.go:102] pod "coredns-5dd5756b68-znm2g" in "kube-system" namespace has status "Ready":"False"
	I0830 22:55:51.866024 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:52.004182 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:52.054922 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:52.366608 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:52.504623 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:52.557979 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:52.865992 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:53.004874 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:53.055168 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:53.365538 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:53.504153 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:53.554481 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:53.760790 1502803 pod_ready.go:102] pod "coredns-5dd5756b68-znm2g" in "kube-system" namespace has status "Ready":"False"
	I0830 22:55:53.866207 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:54.004107 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:54.054465 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:54.365218 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:54.503970 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:54.555003 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:54.865885 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:55.004565 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:55.054603 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:55.365743 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:55.503864 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:55.555902 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:55.865264 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:56.019420 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:56.054072 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:56.260438 1502803 pod_ready.go:102] pod "coredns-5dd5756b68-znm2g" in "kube-system" namespace has status "Ready":"False"
	I0830 22:55:56.375798 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:56.503955 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:56.554661 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:56.865706 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:57.009651 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:57.057286 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:57.275053 1502803 pod_ready.go:92] pod "coredns-5dd5756b68-znm2g" in "kube-system" namespace has status "Ready":"True"
	I0830 22:55:57.275078 1502803 pod_ready.go:81] duration metric: took 28.533774001s waiting for pod "coredns-5dd5756b68-znm2g" in "kube-system" namespace to be "Ready" ...
	I0830 22:55:57.275091 1502803 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-435384" in "kube-system" namespace to be "Ready" ...
	I0830 22:55:57.280367 1502803 pod_ready.go:92] pod "etcd-addons-435384" in "kube-system" namespace has status "Ready":"True"
	I0830 22:55:57.280393 1502803 pod_ready.go:81] duration metric: took 5.294892ms waiting for pod "etcd-addons-435384" in "kube-system" namespace to be "Ready" ...
	I0830 22:55:57.280404 1502803 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-435384" in "kube-system" namespace to be "Ready" ...
	I0830 22:55:57.287647 1502803 pod_ready.go:92] pod "kube-apiserver-addons-435384" in "kube-system" namespace has status "Ready":"True"
	I0830 22:55:57.287670 1502803 pod_ready.go:81] duration metric: took 7.258812ms waiting for pod "kube-apiserver-addons-435384" in "kube-system" namespace to be "Ready" ...
	I0830 22:55:57.287681 1502803 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-435384" in "kube-system" namespace to be "Ready" ...
	I0830 22:55:57.298140 1502803 pod_ready.go:92] pod "kube-controller-manager-addons-435384" in "kube-system" namespace has status "Ready":"True"
	I0830 22:55:57.298166 1502803 pod_ready.go:81] duration metric: took 10.476694ms waiting for pod "kube-controller-manager-addons-435384" in "kube-system" namespace to be "Ready" ...
	I0830 22:55:57.298177 1502803 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cx2zc" in "kube-system" namespace to be "Ready" ...
	I0830 22:55:57.313138 1502803 pod_ready.go:92] pod "kube-proxy-cx2zc" in "kube-system" namespace has status "Ready":"True"
	I0830 22:55:57.313163 1502803 pod_ready.go:81] duration metric: took 14.978677ms waiting for pod "kube-proxy-cx2zc" in "kube-system" namespace to be "Ready" ...
	I0830 22:55:57.313175 1502803 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-435384" in "kube-system" namespace to be "Ready" ...
	I0830 22:55:57.365452 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:57.503574 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:57.554807 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:57.657030 1502803 pod_ready.go:92] pod "kube-scheduler-addons-435384" in "kube-system" namespace has status "Ready":"True"
	I0830 22:55:57.657054 1502803 pod_ready.go:81] duration metric: took 343.871599ms waiting for pod "kube-scheduler-addons-435384" in "kube-system" namespace to be "Ready" ...
	I0830 22:55:57.657063 1502803 pod_ready.go:38] duration metric: took 40.448074505s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
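The pod_ready.go:92/102 lines report the pod's Ready condition rather than its phase. That test reduces to scanning Status.Conditions for PodReady; a sketch reusing the imports above (not the actual minikube helper):

    // podReady reports whether a pod's PodReady condition is True -- the check
    // behind the `has status "Ready":"True"` lines in this log.
    func podReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }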
	I0830 22:55:57.657082 1502803 api_server.go:52] waiting for apiserver process to appear ...
	I0830 22:55:57.657173 1502803 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 22:55:57.676481 1502803 api_server.go:72] duration metric: took 40.967087579s to wait for apiserver process to appear ...
	I0830 22:55:57.676507 1502803 api_server.go:88] waiting for apiserver healthz status ...
	I0830 22:55:57.676524 1502803 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0830 22:55:57.686539 1502803 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0830 22:55:57.687804 1502803 api_server.go:141] control plane version: v1.28.1
	I0830 22:55:57.687830 1502803 api_server.go:131] duration metric: took 11.315823ms to wait for apiserver health ...
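The api_server.go:253/279 lines are a plain HTTPS GET against the apiserver's /healthz endpoint; a healthy apiserver answers 200 with the body "ok", as seen above. A sketch (add "io" and "net/http" to the imports), assuming client already carries the cluster's TLS client certificates -- that wiring is omitted here:

    // apiServerHealthz performs the GET logged above at api_server.go:253.
    func apiServerHealthz(client *http.Client, endpoint string) error {
    	resp, err := client.Get(endpoint + "/healthz")
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body) // body is typically just "ok"
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
    	}
    	return nil
    }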
	I0830 22:55:57.687840 1502803 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 22:55:57.863195 1502803 system_pods.go:59] 16 kube-system pods found
	I0830 22:55:57.863229 1502803 system_pods.go:61] "coredns-5dd5756b68-znm2g" [807134bf-a88a-4262-a196-6cfaab288363] Running
	I0830 22:55:57.863239 1502803 system_pods.go:61] "csi-hostpath-attacher-0" [f3ca96e8-c41d-4828-a1a3-61779aaecdc7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0830 22:55:57.863248 1502803 system_pods.go:61] "csi-hostpath-resizer-0" [4ee5b195-ea1f-4687-b7ad-ad1416f09d6d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0830 22:55:57.863256 1502803 system_pods.go:61] "csi-hostpathplugin-hbqdz" [7686085f-0949-4d74-b8d2-78cf7d0ef272] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0830 22:55:57.863267 1502803 system_pods.go:61] "etcd-addons-435384" [5ea56f78-ecce-4555-815b-ee0f562f8504] Running
	I0830 22:55:57.863272 1502803 system_pods.go:61] "kube-apiserver-addons-435384" [86755693-8fbe-4a28-b44e-b4848b2611f5] Running
	I0830 22:55:57.863280 1502803 system_pods.go:61] "kube-controller-manager-addons-435384" [864f21b1-e95b-4f7d-b35b-a47cd5521a9b] Running
	I0830 22:55:57.863288 1502803 system_pods.go:61] "kube-ingress-dns-minikube" [2b0ba22a-07c7-431d-8ba7-225863342bc6] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0830 22:55:57.863299 1502803 system_pods.go:61] "kube-proxy-cx2zc" [d96de144-3f3e-4c2d-950f-816758b59f31] Running
	I0830 22:55:57.863306 1502803 system_pods.go:61] "kube-scheduler-addons-435384" [bdcedba8-d970-4cd9-82cd-b50bfd686028] Running
	I0830 22:55:57.863315 1502803 system_pods.go:61] "metrics-server-7c66d45ddc-pdpjb" [cee10f1b-6f1c-408f-a38c-926f2b99bc1e] Running
	I0830 22:55:57.863320 1502803 system_pods.go:61] "registry-9hsgm" [927dfbd6-c866-43ed-9e9a-de2830d7ff89] Running
	I0830 22:55:57.863327 1502803 system_pods.go:61] "registry-proxy-w2rth" [7f082a39-dfa1-47b1-ae76-2bfc5efbea4d] Running
	I0830 22:55:57.863335 1502803 system_pods.go:61] "snapshot-controller-58dbcc7b99-dj2rw" [b32c5920-637e-4b69-8630-6e9e3406d60e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0830 22:55:57.863340 1502803 system_pods.go:61] "snapshot-controller-58dbcc7b99-znxgz" [69d2926f-e9d7-459c-8121-9da28ecdcae1] Running
	I0830 22:55:57.863345 1502803 system_pods.go:61] "storage-provisioner" [4f483826-6079-4870-83a4-98eb7560f9c7] Running
	I0830 22:55:57.863353 1502803 system_pods.go:74] duration metric: took 175.507667ms to wait for pod list to return data ...
	I0830 22:55:57.863368 1502803 default_sa.go:34] waiting for default service account to be created ...
	I0830 22:55:57.868785 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:58.003574 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:58.056895 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:58.057653 1502803 default_sa.go:45] found service account: "default"
	I0830 22:55:58.057675 1502803 default_sa.go:55] duration metric: took 194.300328ms for default service account to be created ...
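The default_sa.go lines wait for the "default" ServiceAccount to exist in the "default" namespace. A sketch of that condition (apierrors is "k8s.io/apimachinery/pkg/api/errors"; minikube's real check may also tolerate transient API errors):

    // defaultSAExists checks for the "default" ServiceAccount -- the condition
    // default_sa.go:45 reports as found.
    func defaultSAExists(ctx context.Context, cs kubernetes.Interface) (bool, error) {
    	_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
    	if apierrors.IsNotFound(err) {
    		return false, nil
    	}
    	return err == nil, err
    }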
	I0830 22:55:58.057685 1502803 system_pods.go:116] waiting for k8s-apps to be running ...
	I0830 22:55:58.263610 1502803 system_pods.go:86] 16 kube-system pods found
	I0830 22:55:58.263644 1502803 system_pods.go:89] "coredns-5dd5756b68-znm2g" [807134bf-a88a-4262-a196-6cfaab288363] Running
	I0830 22:55:58.263655 1502803 system_pods.go:89] "csi-hostpath-attacher-0" [f3ca96e8-c41d-4828-a1a3-61779aaecdc7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0830 22:55:58.263664 1502803 system_pods.go:89] "csi-hostpath-resizer-0" [4ee5b195-ea1f-4687-b7ad-ad1416f09d6d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0830 22:55:58.263673 1502803 system_pods.go:89] "csi-hostpathplugin-hbqdz" [7686085f-0949-4d74-b8d2-78cf7d0ef272] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0830 22:55:58.263679 1502803 system_pods.go:89] "etcd-addons-435384" [5ea56f78-ecce-4555-815b-ee0f562f8504] Running
	I0830 22:55:58.263686 1502803 system_pods.go:89] "kube-apiserver-addons-435384" [86755693-8fbe-4a28-b44e-b4848b2611f5] Running
	I0830 22:55:58.263691 1502803 system_pods.go:89] "kube-controller-manager-addons-435384" [864f21b1-e95b-4f7d-b35b-a47cd5521a9b] Running
	I0830 22:55:58.263699 1502803 system_pods.go:89] "kube-ingress-dns-minikube" [2b0ba22a-07c7-431d-8ba7-225863342bc6] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0830 22:55:58.263709 1502803 system_pods.go:89] "kube-proxy-cx2zc" [d96de144-3f3e-4c2d-950f-816758b59f31] Running
	I0830 22:55:58.263718 1502803 system_pods.go:89] "kube-scheduler-addons-435384" [bdcedba8-d970-4cd9-82cd-b50bfd686028] Running
	I0830 22:55:58.263726 1502803 system_pods.go:89] "metrics-server-7c66d45ddc-pdpjb" [cee10f1b-6f1c-408f-a38c-926f2b99bc1e] Running
	I0830 22:55:58.263734 1502803 system_pods.go:89] "registry-9hsgm" [927dfbd6-c866-43ed-9e9a-de2830d7ff89] Running
	I0830 22:55:58.263739 1502803 system_pods.go:89] "registry-proxy-w2rth" [7f082a39-dfa1-47b1-ae76-2bfc5efbea4d] Running
	I0830 22:55:58.263747 1502803 system_pods.go:89] "snapshot-controller-58dbcc7b99-dj2rw" [b32c5920-637e-4b69-8630-6e9e3406d60e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0830 22:55:58.263758 1502803 system_pods.go:89] "snapshot-controller-58dbcc7b99-znxgz" [69d2926f-e9d7-459c-8121-9da28ecdcae1] Running
	I0830 22:55:58.263763 1502803 system_pods.go:89] "storage-provisioner" [4f483826-6079-4870-83a4-98eb7560f9c7] Running
	I0830 22:55:58.263769 1502803 system_pods.go:126] duration metric: took 206.079281ms to wait for k8s-apps to be running ...
	I0830 22:55:58.263780 1502803 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 22:55:58.263836 1502803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 22:55:58.280737 1502803 system_svc.go:56] duration metric: took 16.947831ms WaitForService to wait for kubelet.
	I0830 22:55:58.280810 1502803 kubeadm.go:581] duration metric: took 41.571420746s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
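The ssh_runner line above shows the exact command used for the kubelet-service check. Run locally via os/exec rather than over SSH for brevity (import "os/exec"); `systemctl is-active --quiet` exits 0 only while the unit is active:

    // kubeletActive mirrors the ssh_runner command logged above.
    func kubeletActive() bool {
    	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
    }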
	I0830 22:55:58.280842 1502803 node_conditions.go:102] verifying NodePressure condition ...
	I0830 22:55:58.365728 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:58.457626 1502803 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0830 22:55:58.457655 1502803 node_conditions.go:123] node cpu capacity is 2
	I0830 22:55:58.457668 1502803 node_conditions.go:105] duration metric: took 176.806259ms to run NodePressure ...
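The node_conditions lines read each Node's reported capacity (ephemeral storage and CPU, matching the 203034800Ki and 2 above). A sketch that surfaces the same figures, again assuming the clientset from the first sketch:

    // printNodeCapacity lists CPU and ephemeral-storage capacity from each
    // Node's status -- the values the node_conditions lines report.
    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name,
    			n.Status.Capacity.Cpu().String(),
    			n.Status.Capacity.StorageEphemeral().String())
    	}
    	return nil
    }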
	I0830 22:55:58.457680 1502803 start.go:228] waiting for startup goroutines ...
	I0830 22:55:58.504210 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:58.554851 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:58.865448 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:59.004404 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:59.055160 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:59.366055 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:55:59.503291 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:55:59.554776 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:55:59.869935 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:56:00.004225 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:00.054787 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:00.366161 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:56:00.504029 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:00.559946 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:00.865234 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:56:01.003522 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:01.055012 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:01.365844 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:56:01.504092 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:01.555916 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:01.865290 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:56:02.003951 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:02.054436 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:02.365939 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:56:02.504227 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:02.555209 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:02.865927 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:56:03.004316 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:03.056055 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:03.365516 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:56:03.506984 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:03.566162 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:03.865577 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:56:04.004250 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:04.055164 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:04.366030 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:56:04.503904 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:04.554356 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:04.869246 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:56:05.004915 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:05.055377 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:05.364935 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:56:05.503972 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:05.554255 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:05.873198 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:56:06.003917 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:06.055044 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:06.365557 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:56:06.504102 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:06.557051 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:06.867113 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:56:07.003626 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:07.054751 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:07.365158 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:56:07.503390 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:07.554629 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:07.865733 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:56:08.003756 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:08.055257 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:08.364915 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:56:08.503602 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:08.556181 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:08.865955 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:56:09.004029 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:09.054556 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:09.365819 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:56:09.503253 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:09.554742 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:09.865417 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:56:10.004175 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:10.054916 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:10.365533 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:56:10.504038 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:10.555062 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:10.865321 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:56:11.004488 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:11.054366 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:11.364957 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:56:11.503518 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:11.554873 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:11.866096 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:56:12.003990 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:12.059037 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:12.365288 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:56:12.504004 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:12.554259 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:12.866050 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:56:13.004068 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:13.054510 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:13.365208 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:56:13.503773 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:13.555295 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:13.866083 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:56:14.003887 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:14.055433 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:14.365822 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:56:14.503405 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:14.554768 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:14.865236 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0830 22:56:15.004025 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:15.054959 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:15.366251 1502803 kapi.go:107] duration metric: took 49.023547093s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0830 22:56:15.503708 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:15.554599 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:16.003362 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:16.054858 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:16.503761 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:16.554984 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:17.003823 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:17.054702 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:17.503658 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:17.555109 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:18.005365 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:18.054937 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:18.504045 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:18.556017 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:19.004146 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:19.055038 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:19.503991 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:19.554538 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:20.003706 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:20.054614 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:20.503316 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:20.555078 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:21.004154 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:21.054703 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:21.503382 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:21.554580 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:22.003369 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:22.055296 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:22.504002 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:22.554158 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:23.004035 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:23.054878 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:23.503325 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:23.555386 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:24.003565 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:24.054549 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:24.503478 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:24.555172 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:25.003930 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:25.054216 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:25.504088 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:25.554476 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:26.003330 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:26.055008 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:26.504145 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:26.555255 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:27.003470 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:27.055181 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:27.504163 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:27.554655 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:28.003473 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:28.055024 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:28.504276 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:28.562000 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:29.003872 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:29.055231 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:29.503941 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:29.556350 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:30.004078 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:30.055331 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:30.503822 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:30.555123 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:31.003909 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:31.054583 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:31.503377 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:31.554808 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:32.003825 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:32.054963 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:32.503599 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:32.554756 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:33.004484 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:33.055099 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:33.504415 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:33.555786 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:34.003541 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:34.055447 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:34.503168 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:34.555227 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:35.004144 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:35.054864 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:35.503962 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:35.554237 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:36.004126 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:36.054556 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:36.503473 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:36.555048 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:37.006523 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:37.055804 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:37.503872 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:37.555176 1502803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0830 22:56:38.004646 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:38.058554 1502803 kapi.go:107] duration metric: took 1m13.525820395s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0830 22:56:38.504542 1502803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0830 22:56:39.003355 1502803 kapi.go:107] duration metric: took 1m11.012492781s to wait for kubernetes.io/minikube-addons=gcp-auth ...
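	The two "duration metric" lines close minikube's internal polling loop (kapi.go), which re-checks the labelled pods roughly twice a second until they leave Pending. A roughly equivalent standalone check with plain kubectl, assuming only the label selectors and namespaces that appear in this log, would be:
	
	  kubectl --context addons-435384 -n ingress-nginx wait --for=condition=Ready \
	    pod --selector=app.kubernetes.io/name=ingress-nginx --timeout=90s
	  kubectl --context addons-435384 -n gcp-auth wait --for=condition=Ready \
	    pod --selector=kubernetes.io/minikube-addons=gcp-auth --timeout=90s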
	I0830 22:56:39.005830 1502803 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-435384 cluster.
	I0830 22:56:39.007701 1502803 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0830 22:56:39.009450 1502803 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0830 22:56:39.011439 1502803 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, inspektor-gadget, default-storageclass, metrics-server, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0830 22:56:39.013609 1502803 addons.go:502] enable addons completed in 1m22.44064058s: enabled=[cloud-spanner ingress-dns storage-provisioner inspektor-gadget default-storageclass metrics-server volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0830 22:56:39.013682 1502803 start.go:233] waiting for cluster config update ...
	I0830 22:56:39.013714 1502803 start.go:242] writing updated cluster config ...
	I0830 22:56:39.014054 1502803 ssh_runner.go:195] Run: rm -f paused
	I0830 22:56:39.077107 1502803 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0830 22:56:39.079637 1502803 out.go:177] * Done! kubectl is now configured to use "addons-435384" cluster and "default" namespace by default
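	As the gcp-auth messages above explain, the credential mount is opt-out per pod via the `gcp-auth-skip-secret` label. A minimal sketch of an opted-out pod (the pod name is illustrative, and the label value "true" is an assumption; the log only names the key):
	
	  kubectl --context addons-435384 apply -f - <<'EOF'
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: no-gcp-creds                  # hypothetical name, for illustration only
	    labels:
	      gcp-auth-skip-secret: "true"      # key taken from the log; value assumed
	  spec:
	    containers:
	    - name: app
	      image: nginx
	  EOF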
	
	* 
	* ==> Docker <==
	* Aug 30 22:57:28 addons-435384 cri-dockerd[1306]: time="2023-08-30T22:57:28Z" level=info msg="Stop pulling image gcr.io/google-samples/hello-app:1.0: Status: Downloaded newer image for gcr.io/google-samples/hello-app:1.0"
	Aug 30 22:57:28 addons-435384 dockerd[1096]: time="2023-08-30T22:57:28.996099260Z" level=info msg="ignoring event" container=1adc6c601a2a02d2d15482cba9aa299253e27f26661f3a0982961c306f0d01b2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 22:57:30 addons-435384 dockerd[1096]: time="2023-08-30T22:57:30.250440063Z" level=info msg="ignoring event" container=0f90653ebff2212a1baf776369ef443d4b2dbb56e383bb65bb760e3547e31d84 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 22:57:30 addons-435384 dockerd[1096]: time="2023-08-30T22:57:30.307934866Z" level=info msg="ignoring event" container=1105d95d45cc4c30fb3d43465399c2c421cbe2a5cc4c7584c04ba5cce0f1e393 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 22:57:30 addons-435384 dockerd[1096]: time="2023-08-30T22:57:30.427537923Z" level=info msg="ignoring event" container=7df37ccc2259b91879f2323a66528cdf1b4d89ac30b19e70a1bb8a537b4e3720 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 22:57:41 addons-435384 cri-dockerd[1306]: time="2023-08-30T22:57:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/96f6f992139c89c0d8ec405f8b1f80fae63314e6afdcd27f05f7dd214178c9a7/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Aug 30 22:57:41 addons-435384 cri-dockerd[1306]: time="2023-08-30T22:57:41Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Aug 30 22:57:42 addons-435384 dockerd[1096]: time="2023-08-30T22:57:42.392156839Z" level=info msg="ignoring event" container=ac442aaba3039065296c6cfe01f06768909193c9e4fc5fa82fcb3dd733addc02 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 22:57:44 addons-435384 dockerd[1096]: time="2023-08-30T22:57:44.125849975Z" level=info msg="ignoring event" container=eb88733be9bd8ea780460bf419b17e15560ea60c0ca8ea7c980dd20c8173688d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 22:57:47 addons-435384 dockerd[1096]: time="2023-08-30T22:57:47.053747995Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=b48bce5c9dab531e5917951014e1abd74bb04b0a0d0fca0a6ea9fc53513ff2ee
	Aug 30 22:57:47 addons-435384 dockerd[1096]: time="2023-08-30T22:57:47.133666046Z" level=info msg="ignoring event" container=b48bce5c9dab531e5917951014e1abd74bb04b0a0d0fca0a6ea9fc53513ff2ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 22:57:47 addons-435384 dockerd[1096]: time="2023-08-30T22:57:47.274118126Z" level=info msg="ignoring event" container=48e732ca8b3a10f9776f7703af3d90f06f5dbc1b13369cd2bc1342ef7875f864 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 22:57:49 addons-435384 dockerd[1096]: time="2023-08-30T22:57:49.171601844Z" level=info msg="ignoring event" container=23490362c8e70350015edeabed7b3aaaa85132da0afd6372449e1f844a5e2c8a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 22:57:49 addons-435384 dockerd[1096]: time="2023-08-30T22:57:49.296401179Z" level=info msg="ignoring event" container=96f6f992139c89c0d8ec405f8b1f80fae63314e6afdcd27f05f7dd214178c9a7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 22:57:51 addons-435384 dockerd[1096]: time="2023-08-30T22:57:51.047857542Z" level=info msg="ignoring event" container=c45c1cc8070d4995514a83a1e774f13173b675abc54f1b1bd4f425e78cb8d226 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 22:57:51 addons-435384 dockerd[1096]: time="2023-08-30T22:57:51.076312633Z" level=info msg="ignoring event" container=60fca69eced907f7788d81935805ca5cd354f87ded783acecdbe6e0607e7ed30 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 22:57:51 addons-435384 dockerd[1096]: time="2023-08-30T22:57:51.093281175Z" level=info msg="ignoring event" container=b9923fe60f7b8b686d9dd196c990c1f4a0cc1a9137b0f196f377dc9312771565 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 22:57:51 addons-435384 dockerd[1096]: time="2023-08-30T22:57:51.096135563Z" level=info msg="ignoring event" container=ea66330f70589267b91ac25f0c851362241313005aada832c713b65558aa4eaa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 22:57:51 addons-435384 dockerd[1096]: time="2023-08-30T22:57:51.114625400Z" level=info msg="ignoring event" container=09c8e3327294c2f68047c24263cb59f2a4f64d5ee2466b68fc2959a09bc3db63 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 22:57:51 addons-435384 dockerd[1096]: time="2023-08-30T22:57:51.114675533Z" level=info msg="ignoring event" container=375e7cbb8f002b3ec52ab9907534c6914edc80a648ec5ce376e47656f50a189d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 22:57:51 addons-435384 dockerd[1096]: time="2023-08-30T22:57:51.114699090Z" level=info msg="ignoring event" container=e8ba2c505d2fd3d8d328a789d43b4587908e64736908391f91a548121eb41bc7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 22:57:51 addons-435384 dockerd[1096]: time="2023-08-30T22:57:51.147228604Z" level=info msg="ignoring event" container=7acee8f75f9e3b06d37acf7572b66870fbb84e461c2b77d6ea15cae7226efa46 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 22:57:51 addons-435384 dockerd[1096]: time="2023-08-30T22:57:51.443189384Z" level=info msg="ignoring event" container=083febc74cf26597faf9fe9cdce4ce06d740cf35ac249343c302f0a72c6045a0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 22:57:51 addons-435384 dockerd[1096]: time="2023-08-30T22:57:51.550382762Z" level=info msg="ignoring event" container=198919c40fdfc50bbd48be3bbafbf5edfb515906c6da29ec3bcfc115d88fd903 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 22:57:51 addons-435384 dockerd[1096]: time="2023-08-30T22:57:51.596173624Z" level=info msg="ignoring event" container=3f76742ebfe05e60e4b5be9df31bdf88583a1b16126f22705a878316dc7720e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
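	The cri-dockerd entry at 22:57:41 shows each pod's resolv.conf being rewritten to point at the cluster DNS Service (10.96.0.10) with the standard search domains and ndots:5. To confirm what a pod actually resolves with, one can read the file back (a sketch; assumes the default-namespace nginx pod from this run is still present):
	
	  kubectl --context addons-435384 exec nginx -- cat /etc/resolv.conf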
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                         ATTEMPT             POD ID              POD
	eb88733be9bd8       13753a81eccfd                                                                                                                9 seconds ago        Exited              hello-world-app              2                   d91ba84a41087       hello-world-app-5d77478584-9wxkb
	2d6bd43e93ddb       nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                                                35 seconds ago       Running             nginx                        0                   3ac839835aa33       nginx
	9198d38f941ae       ghcr.io/headlamp-k8s/headlamp@sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98                        About a minute ago   Running             headlamp                     0                   55213a0342b31       headlamp-699c48fb74-h4ng7
	f151f2a804448       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                 About a minute ago   Running             gcp-auth                     0                   1162a8a2bf25c       gcp-auth-d4c87556c-qvmj8
	7acee8f75f9e3       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7              About a minute ago   Exited              csi-resizer                  0                   198919c40fdfc       csi-hostpath-resizer-0
	383edb52bad73       8f2588812ab29                                                                                                                About a minute ago   Exited              patch                        1                   ae382a8662a49       ingress-nginx-admission-patch-n2hlz
	667aa17dcdd5c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b   About a minute ago   Exited              create                       0                   2626ea987154b       ingress-nginx-admission-create-hlrpx
	6126a17f8bbe7       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280      About a minute ago   Running             volume-snapshot-controller   0                   6ff73837f1038       snapshot-controller-58dbcc7b99-dj2rw
	a8c02cc337f5c       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280      About a minute ago   Running             volume-snapshot-controller   0                   d49eade5a721b       snapshot-controller-58dbcc7b99-znxgz
	de977636c1b37       ba04bb24b9575                                                                                                                2 minutes ago        Running             storage-provisioner          0                   619dae82a9062       storage-provisioner
	a8e21052e3e8c       97e04611ad434                                                                                                                2 minutes ago        Running             coredns                      0                   db7e6c2f134a6       coredns-5dd5756b68-znm2g
	e815bc7aef398       812f5241df7fd                                                                                                                2 minutes ago        Running             kube-proxy                   0                   653a440f3f3a8       kube-proxy-cx2zc
	66475eda59653       b29fb62480892                                                                                                                2 minutes ago        Running             kube-apiserver               0                   cd121e9e2cf29       kube-apiserver-addons-435384
	8d8f7647a96a3       8b6e1980b7584                                                                                                                2 minutes ago        Running             kube-controller-manager      0                   f053549594f16       kube-controller-manager-addons-435384
	48844196d98a8       9cdd6470f48c8                                                                                                                2 minutes ago        Running             etcd                         0                   8b8a48bebe2fe       etcd-addons-435384
	8fb7cfcbf5fe2       b4a5a57e99492                                                                                                                2 minutes ago        Running             kube-scheduler               0                   1ce4920fb327d       kube-scheduler-addons-435384
	
	* 
	* ==> coredns [a8e21052e3e8] <==
	* [INFO] 10.244.0.17:49040 - 45810 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000099906s
	[INFO] 10.244.0.17:49040 - 7116 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001210599s
	[INFO] 10.244.0.17:42918 - 33500 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00219116s
	[INFO] 10.244.0.17:42918 - 21669 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001439539s
	[INFO] 10.244.0.17:49040 - 30001 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001991284s
	[INFO] 10.244.0.17:49040 - 34678 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000135524s
	[INFO] 10.244.0.17:42918 - 9379 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000073017s
	[INFO] 10.244.0.17:46648 - 25541 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00012498s
	[INFO] 10.244.0.17:33042 - 31141 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000051979s
	[INFO] 10.244.0.17:33042 - 38923 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000062063s
	[INFO] 10.244.0.17:33042 - 16168 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000053333s
	[INFO] 10.244.0.17:33042 - 58679 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000053415s
	[INFO] 10.244.0.17:33042 - 28493 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000111942s
	[INFO] 10.244.0.17:33042 - 64469 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000039745s
	[INFO] 10.244.0.17:33042 - 41243 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001091764s
	[INFO] 10.244.0.17:46648 - 16695 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000058093s
	[INFO] 10.244.0.17:46648 - 34315 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00006999s
	[INFO] 10.244.0.17:33042 - 63311 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000900675s
	[INFO] 10.244.0.17:46648 - 3467 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000088123s
	[INFO] 10.244.0.17:33042 - 47622 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000047483s
	[INFO] 10.244.0.17:46648 - 7968 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000080648s
	[INFO] 10.244.0.17:46648 - 33916 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00007854s
	[INFO] 10.244.0.17:46648 - 55237 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001035183s
	[INFO] 10.244.0.17:46648 - 60853 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000808368s
	[INFO] 10.244.0.17:46648 - 12899 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000045169s
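	The NXDOMAIN bursts above are the ndots:5 search-path expansion at work: hello-world-app.default.svc.cluster.local has only four dots, so the resolver appends each search domain in turn (producing the .ingress-nginx.svc..., .svc.cluster.local, .cluster.local and .us-east-2.compute.internal misses) before the bare name finally answers NOERROR. A trailing dot marks the name fully qualified and skips the expansion; a quick check from a throwaway pod (busybox image chosen only for illustration):
	
	  kubectl --context addons-435384 run dns-test --rm -it --restart=Never \
	    --image=busybox:1.36 -- nslookup hello-world-app.default.svc.cluster.local.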
	
	* 
	* ==> describe nodes <==
	* Name:               addons-435384
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-435384
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5
	                    minikube.k8s.io/name=addons-435384
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_30T22_55_03_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-435384
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Aug 2023 22:55:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-435384
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Aug 2023 22:57:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Aug 2023 22:57:37 +0000   Wed, 30 Aug 2023 22:54:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Aug 2023 22:57:37 +0000   Wed, 30 Aug 2023 22:54:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Aug 2023 22:57:37 +0000   Wed, 30 Aug 2023 22:54:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Aug 2023 22:57:37 +0000   Wed, 30 Aug 2023 22:55:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-435384
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 017586e5bdb94e09a7584c24bdaf12e3
	  System UUID:                678d0447-a035-4091-b80a-89886fbe72f3
	  Boot ID:                    b8a33901-d088-4f70-8e50-554d8f07ad5d
	  Kernel Version:             5.15.0-1043-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.5
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-9wxkb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  gcp-auth                    gcp-auth-d4c87556c-qvmj8                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  headlamp                    headlamp-699c48fb74-h4ng7                0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 coredns-5dd5756b68-znm2g                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m36s
	  kube-system                 etcd-addons-435384                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m48s
	  kube-system                 kube-apiserver-addons-435384             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 kube-controller-manager-addons-435384    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 kube-proxy-cx2zc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 kube-scheduler-addons-435384             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 snapshot-controller-58dbcc7b99-dj2rw     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 snapshot-controller-58dbcc7b99-znxgz     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m34s                  kube-proxy       
	  Normal  Starting                 2m57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m57s (x8 over 2m57s)  kubelet          Node addons-435384 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m57s (x8 over 2m57s)  kubelet          Node addons-435384 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m57s (x7 over 2m57s)  kubelet          Node addons-435384 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m49s                  kubelet          Node addons-435384 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m49s                  kubelet          Node addons-435384 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m49s                  kubelet          Node addons-435384 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m49s                  kubelet          Node addons-435384 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m38s                  kubelet          Node addons-435384 status is now: NodeReady
	  Normal  RegisteredNode           2m36s                  node-controller  Node addons-435384 event: Registered Node addons-435384 in Controller
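	The report above is kubectl describe node output; when a script needs the same capacity or allocatable figures, they can be pulled field-by-field, e.g.:
	
	  kubectl --context addons-435384 get node addons-435384 \
	    -o jsonpath='{.status.allocatable.cpu} {.status.allocatable.memory}{"\n"}'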
	
	* 
	* ==> dmesg <==
	* [  +0.000916] FS-Cache: N-cookie d=0000000095b1c4f2{9p.inode} n=0000000063aceb57
	[  +0.001072] FS-Cache: N-key=[8] 'f3623b0000000000'
	[  +0.005581] FS-Cache: Duplicate cookie detected
	[  +0.000701] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.000942] FS-Cache: O-cookie d=0000000095b1c4f2{9p.inode} n=000000005b371b38
	[  +0.001073] FS-Cache: O-key=[8] 'f3623b0000000000'
	[  +0.000707] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000929] FS-Cache: N-cookie d=0000000095b1c4f2{9p.inode} n=0000000025d35af9
	[  +0.001038] FS-Cache: N-key=[8] 'f3623b0000000000'
	[  +2.308753] FS-Cache: Duplicate cookie detected
	[  +0.000710] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.000962] FS-Cache: O-cookie d=0000000095b1c4f2{9p.inode} n=00000000242ec187
	[  +0.001046] FS-Cache: O-key=[8] 'f2623b0000000000'
	[  +0.000695] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000959] FS-Cache: N-cookie d=0000000095b1c4f2{9p.inode} n=0000000063aceb57
	[  +0.001035] FS-Cache: N-key=[8] 'f2623b0000000000'
	[  +0.404842] FS-Cache: Duplicate cookie detected
	[  +0.000710] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.000997] FS-Cache: O-cookie d=0000000095b1c4f2{9p.inode} n=00000000f2440b70
	[  +0.001044] FS-Cache: O-key=[8] 'f8623b0000000000'
	[  +0.000699] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000923] FS-Cache: N-cookie d=0000000095b1c4f2{9p.inode} n=000000006bfb84c0
	[  +0.001037] FS-Cache: N-key=[8] 'f8623b0000000000'
	[Aug30 22:26] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
	[Aug30 22:35] systemd-journald[223]: Failed to send stream file descriptor to service manager: Connection refused
	
	* 
	* ==> etcd [48844196d98a] <==
	* {"level":"info","ts":"2023-08-30T22:54:56.506744Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-08-30T22:54:56.507251Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2023-08-30T22:54:56.507306Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-30T22:54:56.507327Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-30T22:54:56.507344Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-30T22:54:56.507655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-08-30T22:54:56.507749Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-08-30T22:54:57.185199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-08-30T22:54:57.185431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-08-30T22:54:57.185545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-08-30T22:54:57.185682Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-08-30T22:54:57.185774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-08-30T22:54:57.185875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-08-30T22:54:57.185997Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-08-30T22:54:57.189297Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-435384 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-30T22:54:57.189591Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-30T22:54:57.190745Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-08-30T22:54:57.193226Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-30T22:54:57.197539Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-30T22:54:57.197766Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-30T22:54:57.197876Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-30T22:54:57.198412Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-30T22:54:57.199441Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-30T22:54:57.217183Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-30T22:54:57.217223Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> gcp-auth [f151f2a80444] <==
	* 2023/08/30 22:56:38 GCP Auth Webhook started!
	2023/08/30 22:56:46 Ready to marshal response ...
	2023/08/30 22:56:46 Ready to write response ...
	2023/08/30 22:56:46 Ready to marshal response ...
	2023/08/30 22:56:46 Ready to write response ...
	2023/08/30 22:56:46 Ready to marshal response ...
	2023/08/30 22:56:46 Ready to write response ...
	2023/08/30 22:56:49 Ready to marshal response ...
	2023/08/30 22:56:49 Ready to write response ...
	2023/08/30 22:57:14 Ready to marshal response ...
	2023/08/30 22:57:14 Ready to write response ...
	2023/08/30 22:57:14 Ready to marshal response ...
	2023/08/30 22:57:14 Ready to write response ...
	2023/08/30 22:57:26 Ready to marshal response ...
	2023/08/30 22:57:26 Ready to write response ...
	2023/08/30 22:57:40 Ready to marshal response ...
	2023/08/30 22:57:40 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  22:57:52 up  7:40,  0 users,  load average: 2.73, 3.13, 3.45
	Linux addons-435384 5.15.0-1043-aws #48~20.04.1-Ubuntu SMP Wed Aug 16 18:32:42 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kube-apiserver [66475eda5965] <==
	* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0830 22:55:27.842597       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.109.111.94"}
	W0830 22:55:52.706255       1 handler_proxy.go:93] no RequestInfo found in the context
	E0830 22:55:52.706629       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0830 22:55:52.706554       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.139.251:443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.139.251:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.100.139.251:443: connect: connection refused
	I0830 22:55:52.706958       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.100.139.251:443: connect: connection refused
	I0830 22:55:52.706970       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0830 22:55:52.710296       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.139.251:443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.139.251:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.100.139.251:443: connect: connection refused
	E0830 22:55:52.715581       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.139.251:443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.139.251:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.100.139.251:443: connect: connection refused
	I0830 22:55:52.855103       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0830 22:56:00.524209       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0830 22:56:46.337496       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.229.35"}
	E0830 22:56:54.352854       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x400d7eadb0), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x400e4ac7d0), ResponseWriter:(*httpsnoop.rw)(0x400e4ac7d0), Flusher:(*httpsnoop.rw)(0x400e4ac7d0), CloseNotifier:(*httpsnoop.rw)(0x400e4ac7d0), Pusher:(*httpsnoop.rw)(0x400e4ac7d0)}}, encoder:(*versioning.codec)(0x4009040280), memAllocator:(*runtime.Allocator)(0x4007623c98)})
	I0830 22:57:00.524706       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0830 22:57:01.958183       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0830 22:57:01.973492       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	E0830 22:57:02.012269       1 controller.go:159] removing "v1alpha1.gadget.kinvolk.io" from AggregationController failed with: resource not found
	E0830 22:57:02.012299       1 controller.go:159] removing "v1alpha1.gadget.kinvolk.io" from AggregationController failed with: resource not found
	W0830 22:57:03.002674       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0830 22:57:12.839563       1 controller.go:159] removing "v1beta1.metrics.k8s.io" from AggregationController failed with: resource not found
	I0830 22:57:14.062218       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0830 22:57:14.542393       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.139.146"}
	I0830 22:57:26.349440       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.53.98"}
	I0830 22:57:28.717317       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
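	The 503s above mean the v1beta1.metrics.k8s.io APIService was registered while its backing metrics-server Service was still refusing connections; the later "removing ... from AggregationController failed" lines fire as that APIService is torn down. The aggregation status can be checked directly:
	
	  kubectl --context addons-435384 get apiservice v1beta1.metrics.k8s.io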
	
	* 
	* ==> kube-controller-manager [8d8f7647a96a] <==
	* I0830 22:57:13.056424       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0830 22:57:16.477712       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0830 22:57:16.477751       1 shared_informer.go:318] Caches are synced for resource quota
	I0830 22:57:16.834749       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0830 22:57:16.834799       1 shared_informer.go:318] Caches are synced for garbage collector
	W0830 22:57:22.972241       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0830 22:57:22.972276       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0830 22:57:26.092120       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0830 22:57:26.126794       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-9wxkb"
	I0830 22:57:26.169823       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="78.249704ms"
	I0830 22:57:26.185974       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="16.099792ms"
	I0830 22:57:26.186276       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="260.759µs"
	I0830 22:57:30.025162       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="99.774µs"
	I0830 22:57:31.057823       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="45.046µs"
	I0830 22:57:31.514949       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0830 22:57:32.105962       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="86.949µs"
	I0830 22:57:39.874621       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	W0830 22:57:43.382234       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0830 22:57:43.382265       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0830 22:57:44.003995       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0830 22:57:44.014263       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-5dcd45b5bf" duration="7.622µs"
	I0830 22:57:44.019240       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0830 22:57:44.389696       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="76.357µs"
	I0830 22:57:50.860917       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-attacher"
	I0830 22:57:50.942611       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-resizer"
	
	* 
	* ==> kube-proxy [e815bc7aef39] <==
	* I0830 22:55:18.558800       1 server_others.go:69] "Using iptables proxy"
	I0830 22:55:18.582379       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0830 22:55:18.621599       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0830 22:55:18.625060       1 server_others.go:152] "Using iptables Proxier"
	I0830 22:55:18.625111       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0830 22:55:18.625157       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0830 22:55:18.625222       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0830 22:55:18.625563       1 server.go:846] "Version info" version="v1.28.1"
	I0830 22:55:18.625575       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0830 22:55:18.629684       1 config.go:188] "Starting service config controller"
	I0830 22:55:18.629719       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0830 22:55:18.629741       1 config.go:97] "Starting endpoint slice config controller"
	I0830 22:55:18.629745       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0830 22:55:18.635250       1 config.go:315] "Starting node config controller"
	I0830 22:55:18.635284       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0830 22:55:18.730516       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0830 22:55:18.730591       1 shared_informer.go:318] Caches are synced for service config
	I0830 22:55:18.735388       1 shared_informer.go:318] Caches are synced for node config
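	The route_localnet line above is kube-proxy enabling NodePort access on loopback addresses; whether the sysctl actually took effect on the node can be verified with:
	
	  minikube -p addons-435384 ssh "sysctl net.ipv4.conf.all.route_localnet"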
	
	* 
	* ==> kube-scheduler [8fb7cfcbf5fe] <==
	* W0830 22:55:00.738472       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0830 22:55:00.738492       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0830 22:55:00.738542       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0830 22:55:00.738558       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0830 22:55:00.738595       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0830 22:55:00.738620       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0830 22:55:00.738666       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0830 22:55:00.738692       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0830 22:55:00.741516       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0830 22:55:00.741640       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0830 22:55:00.741771       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0830 22:55:00.741848       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0830 22:55:00.741818       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0830 22:55:00.741937       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0830 22:55:01.572094       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0830 22:55:01.572128       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0830 22:55:01.610899       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0830 22:55:01.611153       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0830 22:55:01.695652       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0830 22:55:01.695862       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0830 22:55:01.715365       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0830 22:55:01.715589       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0830 22:55:01.753403       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0830 22:55:01.753650       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0830 22:55:02.422668       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Aug 30 22:57:52 addons-435384 kubelet[2305]: I0830 22:57:52.010139    2305 scope.go:117] "RemoveContainer" containerID="ea66330f70589267b91ac25f0c851362241313005aada832c713b65558aa4eaa"
	Aug 30 22:57:52 addons-435384 kubelet[2305]: I0830 22:57:52.010875    2305 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"ea66330f70589267b91ac25f0c851362241313005aada832c713b65558aa4eaa"} err="failed to get container status \"ea66330f70589267b91ac25f0c851362241313005aada832c713b65558aa4eaa\": rpc error: code = Unknown desc = Error response from daemon: No such container: ea66330f70589267b91ac25f0c851362241313005aada832c713b65558aa4eaa"
	Aug 30 22:57:52 addons-435384 kubelet[2305]: I0830 22:57:52.010915    2305 scope.go:117] "RemoveContainer" containerID="60fca69eced907f7788d81935805ca5cd354f87ded783acecdbe6e0607e7ed30"
	Aug 30 22:57:52 addons-435384 kubelet[2305]: I0830 22:57:52.011512    2305 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"60fca69eced907f7788d81935805ca5cd354f87ded783acecdbe6e0607e7ed30"} err="failed to get container status \"60fca69eced907f7788d81935805ca5cd354f87ded783acecdbe6e0607e7ed30\": rpc error: code = Unknown desc = Error response from daemon: No such container: 60fca69eced907f7788d81935805ca5cd354f87ded783acecdbe6e0607e7ed30"
	Aug 30 22:57:52 addons-435384 kubelet[2305]: I0830 22:57:52.011548    2305 scope.go:117] "RemoveContainer" containerID="375e7cbb8f002b3ec52ab9907534c6914edc80a648ec5ce376e47656f50a189d"
	Aug 30 22:57:52 addons-435384 kubelet[2305]: I0830 22:57:52.012120    2305 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"375e7cbb8f002b3ec52ab9907534c6914edc80a648ec5ce376e47656f50a189d"} err="failed to get container status \"375e7cbb8f002b3ec52ab9907534c6914edc80a648ec5ce376e47656f50a189d\": rpc error: code = Unknown desc = Error response from daemon: No such container: 375e7cbb8f002b3ec52ab9907534c6914edc80a648ec5ce376e47656f50a189d"
	Aug 30 22:57:52 addons-435384 kubelet[2305]: I0830 22:57:52.012158    2305 scope.go:117] "RemoveContainer" containerID="e8ba2c505d2fd3d8d328a789d43b4587908e64736908391f91a548121eb41bc7"
	Aug 30 22:57:52 addons-435384 kubelet[2305]: I0830 22:57:52.014891    2305 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"e8ba2c505d2fd3d8d328a789d43b4587908e64736908391f91a548121eb41bc7"} err="failed to get container status \"e8ba2c505d2fd3d8d328a789d43b4587908e64736908391f91a548121eb41bc7\": rpc error: code = Unknown desc = Error response from daemon: No such container: e8ba2c505d2fd3d8d328a789d43b4587908e64736908391f91a548121eb41bc7"
	Aug 30 22:57:52 addons-435384 kubelet[2305]: I0830 22:57:52.014949    2305 scope.go:117] "RemoveContainer" containerID="b9923fe60f7b8b686d9dd196c990c1f4a0cc1a9137b0f196f377dc9312771565"
	Aug 30 22:57:52 addons-435384 kubelet[2305]: I0830 22:57:52.015609    2305 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"b9923fe60f7b8b686d9dd196c990c1f4a0cc1a9137b0f196f377dc9312771565"} err="failed to get container status \"b9923fe60f7b8b686d9dd196c990c1f4a0cc1a9137b0f196f377dc9312771565\": rpc error: code = Unknown desc = Error response from daemon: No such container: b9923fe60f7b8b686d9dd196c990c1f4a0cc1a9137b0f196f377dc9312771565"
	Aug 30 22:57:52 addons-435384 kubelet[2305]: I0830 22:57:52.015637    2305 scope.go:117] "RemoveContainer" containerID="09c8e3327294c2f68047c24263cb59f2a4f64d5ee2466b68fc2959a09bc3db63"
	Aug 30 22:57:52 addons-435384 kubelet[2305]: I0830 22:57:52.016368    2305 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"09c8e3327294c2f68047c24263cb59f2a4f64d5ee2466b68fc2959a09bc3db63"} err="failed to get container status \"09c8e3327294c2f68047c24263cb59f2a4f64d5ee2466b68fc2959a09bc3db63\": rpc error: code = Unknown desc = Error response from daemon: No such container: 09c8e3327294c2f68047c24263cb59f2a4f64d5ee2466b68fc2959a09bc3db63"
	Aug 30 22:57:52 addons-435384 kubelet[2305]: I0830 22:57:52.016405    2305 scope.go:117] "RemoveContainer" containerID="ea66330f70589267b91ac25f0c851362241313005aada832c713b65558aa4eaa"
	Aug 30 22:57:52 addons-435384 kubelet[2305]: I0830 22:57:52.016989    2305 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"ea66330f70589267b91ac25f0c851362241313005aada832c713b65558aa4eaa"} err="failed to get container status \"ea66330f70589267b91ac25f0c851362241313005aada832c713b65558aa4eaa\": rpc error: code = Unknown desc = Error response from daemon: No such container: ea66330f70589267b91ac25f0c851362241313005aada832c713b65558aa4eaa"
	Aug 30 22:57:52 addons-435384 kubelet[2305]: I0830 22:57:52.017014    2305 scope.go:117] "RemoveContainer" containerID="60fca69eced907f7788d81935805ca5cd354f87ded783acecdbe6e0607e7ed30"
	Aug 30 22:57:52 addons-435384 kubelet[2305]: I0830 22:57:52.017745    2305 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"60fca69eced907f7788d81935805ca5cd354f87ded783acecdbe6e0607e7ed30"} err="failed to get container status \"60fca69eced907f7788d81935805ca5cd354f87ded783acecdbe6e0607e7ed30\": rpc error: code = Unknown desc = Error response from daemon: No such container: 60fca69eced907f7788d81935805ca5cd354f87ded783acecdbe6e0607e7ed30"
	Aug 30 22:57:52 addons-435384 kubelet[2305]: I0830 22:57:52.017787    2305 scope.go:117] "RemoveContainer" containerID="375e7cbb8f002b3ec52ab9907534c6914edc80a648ec5ce376e47656f50a189d"
	Aug 30 22:57:52 addons-435384 kubelet[2305]: I0830 22:57:52.018339    2305 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"375e7cbb8f002b3ec52ab9907534c6914edc80a648ec5ce376e47656f50a189d"} err="failed to get container status \"375e7cbb8f002b3ec52ab9907534c6914edc80a648ec5ce376e47656f50a189d\": rpc error: code = Unknown desc = Error response from daemon: No such container: 375e7cbb8f002b3ec52ab9907534c6914edc80a648ec5ce376e47656f50a189d"
	Aug 30 22:57:52 addons-435384 kubelet[2305]: I0830 22:57:52.018375    2305 scope.go:117] "RemoveContainer" containerID="e8ba2c505d2fd3d8d328a789d43b4587908e64736908391f91a548121eb41bc7"
	Aug 30 22:57:52 addons-435384 kubelet[2305]: I0830 22:57:52.019021    2305 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"e8ba2c505d2fd3d8d328a789d43b4587908e64736908391f91a548121eb41bc7"} err="failed to get container status \"e8ba2c505d2fd3d8d328a789d43b4587908e64736908391f91a548121eb41bc7\": rpc error: code = Unknown desc = Error response from daemon: No such container: e8ba2c505d2fd3d8d328a789d43b4587908e64736908391f91a548121eb41bc7"
	Aug 30 22:57:52 addons-435384 kubelet[2305]: I0830 22:57:52.019058    2305 scope.go:117] "RemoveContainer" containerID="c45c1cc8070d4995514a83a1e774f13173b675abc54f1b1bd4f425e78cb8d226"
	Aug 30 22:57:52 addons-435384 kubelet[2305]: I0830 22:57:52.035355    2305 scope.go:117] "RemoveContainer" containerID="c45c1cc8070d4995514a83a1e774f13173b675abc54f1b1bd4f425e78cb8d226"
	Aug 30 22:57:52 addons-435384 kubelet[2305]: E0830 22:57:52.036268    2305 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: c45c1cc8070d4995514a83a1e774f13173b675abc54f1b1bd4f425e78cb8d226" containerID="c45c1cc8070d4995514a83a1e774f13173b675abc54f1b1bd4f425e78cb8d226"
	Aug 30 22:57:52 addons-435384 kubelet[2305]: I0830 22:57:52.036326    2305 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"c45c1cc8070d4995514a83a1e774f13173b675abc54f1b1bd4f425e78cb8d226"} err="failed to get container status \"c45c1cc8070d4995514a83a1e774f13173b675abc54f1b1bd4f425e78cb8d226\": rpc error: code = Unknown desc = Error response from daemon: No such container: c45c1cc8070d4995514a83a1e774f13173b675abc54f1b1bd4f425e78cb8d226"
	Aug 30 22:57:52 addons-435384 kubelet[2305]: I0830 22:57:52.732125    2305 scope.go:117] "RemoveContainer" containerID="7acee8f75f9e3b06d37acf7572b66870fbb84e461c2b77d6ea15cae7226efa46"
	
	* 
	* ==> storage-provisioner [de977636c1b3] <==
	* I0830 22:55:22.152052       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0830 22:55:22.187207       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0830 22:55:22.187297       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0830 22:55:22.199044       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0830 22:55:22.199336       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-435384_283f5feb-72e3-4faa-9697-2988536dbb1c!
	I0830 22:55:22.199463       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b2af526d-1ccb-44fc-a725-4b36f0d131bc", APIVersion:"v1", ResourceVersion:"509", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-435384_283f5feb-72e3-4faa-9697-2988536dbb1c became leader
	I0830 22:55:22.300328       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-435384_283f5feb-72e3-4faa-9697-2988536dbb1c!
	
-- /stdout --
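The storage-provisioner output above is a healthy client-go leader election: the pod acquires the Endpoints-based lock kube-system/k8s.io-minikube-hostpath (the Kind:"Endpoints" event object confirms the lock type) and only then starts its provisioner controller, so this component is unrelated to the ingress failure. As a sketch, assuming the profile still exists, the current lock holder can be read straight from that Endpoints object, where client-go records the leader identity in the control-plane.alpha.kubernetes.io/leader annotation:

	kubectl --context addons-435384 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml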
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-435384 -n addons-435384
helpers_test.go:261: (dbg) Run:  kubectl --context addons-435384 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (40.62s)

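Both ingress failures in this run reduce to the same symptom: nslookup hello-john.test 192.168.49.2 times out, i.e. nothing answers DNS on UDP/53 at the node IP after the ingress-dns addon is enabled. A minimal sketch for narrowing this down by hand, assuming the addons-435384 profile is still up and that the addon deploys its usual kube-ingress-dns-minikube pod (the pod name and nc availability inside the node image are assumptions; everything else is taken from the log above):

	# 192.168.49.2 is the node IP reported by `minikube ip`; the addon should answer DNS there
	nslookup -timeout=5 hello-john.test "$(out/minikube-linux-arm64 -p addons-435384 ip)"
	# check that the DNS pod is actually running and where it landed
	kubectl --context addons-435384 -n kube-system get pod kube-ingress-dns-minikube -o wide
	# probe UDP/53 from inside the node; a UDP "open" is weak evidence, but a hard refusal is conclusive
	out/minikube-linux-arm64 -p addons-435384 ssh "nc -z -u -w 2 192.168.49.2 53"
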
TestIngressAddonLegacy/serial/ValidateIngressAddons (56.73s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-211142 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-211142 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (9.372583684s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-211142 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-211142 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2b49551d-7330-453d-81a0-6bf67c81cb26] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2b49551d-7330-453d-81a0-6bf67c81cb26] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.025643766s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-211142 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-211142 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-211142 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.022901422s)

-- stdout --
	;; connection timed out; no servers could be reached
	
-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-211142 addons disable ingress-dns --alsologtostderr -v=1
E0830 23:06:39.102838 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
addons_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-211142 addons disable ingress-dns --alsologtostderr -v=1: (11.544146878s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-211142 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-211142 addons disable ingress --alsologtostderr -v=1: (7.51820723s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-211142
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-211142:

-- stdout --
	[
	    {
	        "Id": "f6e1a9699475726f3dd840011263d77970679b439d6e38e1a17a20edb4473bdf",
	        "Created": "2023-08-30T23:04:17.226572695Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1548303,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-30T23:04:17.540958075Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:879c6efc994c345ac84dd4ebb4fc5b49dd2a4b340e335879382e51233f79b51a",
	        "ResolvConfPath": "/var/lib/docker/containers/f6e1a9699475726f3dd840011263d77970679b439d6e38e1a17a20edb4473bdf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f6e1a9699475726f3dd840011263d77970679b439d6e38e1a17a20edb4473bdf/hostname",
	        "HostsPath": "/var/lib/docker/containers/f6e1a9699475726f3dd840011263d77970679b439d6e38e1a17a20edb4473bdf/hosts",
	        "LogPath": "/var/lib/docker/containers/f6e1a9699475726f3dd840011263d77970679b439d6e38e1a17a20edb4473bdf/f6e1a9699475726f3dd840011263d77970679b439d6e38e1a17a20edb4473bdf-json.log",
	        "Name": "/ingress-addon-legacy-211142",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-211142:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-211142",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6f3176b0d6a0c21061e454a6aef381b74ea4a3e79e574883b2176943ff4c45b1-init/diff:/var/lib/docker/overlay2/ef055cb4b9f7ea74c3fdc71828094f56839d9c7e7022b41a5ab3cc1d5d79c8a3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6f3176b0d6a0c21061e454a6aef381b74ea4a3e79e574883b2176943ff4c45b1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6f3176b0d6a0c21061e454a6aef381b74ea4a3e79e574883b2176943ff4c45b1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6f3176b0d6a0c21061e454a6aef381b74ea4a3e79e574883b2176943ff4c45b1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-211142",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-211142/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-211142",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-211142",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-211142",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bb7242c039f5a5819e1c3f921e2fab3af48a98bb58479a58563ef304cd4d5f55",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34357"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34356"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34353"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34355"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34354"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/bb7242c039f5",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-211142": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f6e1a9699475",
	                        "ingress-addon-legacy-211142"
	                    ],
	                    "NetworkID": "dd38e6058a0a00205b1844c78da0d4861881bc921b827b692d35ea1da21b0ad0",
	                    "EndpointID": "e94d616b128633513b5e2d9701a189a76f0a74b4bbc0e58471aba58a04a16a36",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
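Most of this inspect dump matters for two facts: the container holds the static cluster IP 192.168.49.2 on the ingress-addon-legacy-211142 bridge network, and each exposed port (22, 2376, 5000, 8443, 32443) is published only on 127.0.0.1 under a random host port. A sketch for pulling just those fields with Go templates rather than scanning the full JSON (same container name assumed):

	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ingress-addon-legacy-211142
	docker inspect -f '{{json .NetworkSettings.Ports}}' ingress-addon-legacy-211142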
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-211142 -n ingress-addon-legacy-211142
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-211142 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-211142 logs -n 25: (1.015497187s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-489151                     | functional-489151           | jenkins | v1.31.2 | 30 Aug 23 23:03 UTC |                     |
	|                | --kill=true                              |                             |         |         |                     |                     |
	| update-context | functional-489151                        | functional-489151           | jenkins | v1.31.2 | 30 Aug 23 23:03 UTC | 30 Aug 23 23:03 UTC |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-489151                        | functional-489151           | jenkins | v1.31.2 | 30 Aug 23 23:03 UTC | 30 Aug 23 23:03 UTC |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-489151                        | functional-489151           | jenkins | v1.31.2 | 30 Aug 23 23:03 UTC | 30 Aug 23 23:03 UTC |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| image          | functional-489151                        | functional-489151           | jenkins | v1.31.2 | 30 Aug 23 23:03 UTC | 30 Aug 23 23:03 UTC |
	|                | image ls --format short                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-489151                        | functional-489151           | jenkins | v1.31.2 | 30 Aug 23 23:03 UTC | 30 Aug 23 23:03 UTC |
	|                | image ls --format yaml                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| ssh            | functional-489151 ssh pgrep              | functional-489151           | jenkins | v1.31.2 | 30 Aug 23 23:03 UTC |                     |
	|                | buildkitd                                |                             |         |         |                     |                     |
	| image          | functional-489151 image build -t         | functional-489151           | jenkins | v1.31.2 | 30 Aug 23 23:03 UTC | 30 Aug 23 23:03 UTC |
	|                | localhost/my-image:functional-489151     |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                             |         |         |                     |                     |
	| image          | functional-489151 image ls               | functional-489151           | jenkins | v1.31.2 | 30 Aug 23 23:03 UTC | 30 Aug 23 23:03 UTC |
	| image          | functional-489151                        | functional-489151           | jenkins | v1.31.2 | 30 Aug 23 23:03 UTC | 30 Aug 23 23:03 UTC |
	|                | image ls --format json                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-489151                        | functional-489151           | jenkins | v1.31.2 | 30 Aug 23 23:03 UTC | 30 Aug 23 23:03 UTC |
	|                | image ls --format table                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| delete         | -p functional-489151                     | functional-489151           | jenkins | v1.31.2 | 30 Aug 23 23:03 UTC | 30 Aug 23 23:03 UTC |
	| start          | -p image-693828                          | image-693828                | jenkins | v1.31.2 | 30 Aug 23 23:03 UTC | 30 Aug 23 23:03 UTC |
	|                | --driver=docker                          |                             |         |         |                     |                     |
	|                | --container-runtime=docker               |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-693828                | jenkins | v1.31.2 | 30 Aug 23 23:03 UTC | 30 Aug 23 23:03 UTC |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | -p image-693828                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-693828                | jenkins | v1.31.2 | 30 Aug 23 23:03 UTC | 30 Aug 23 23:03 UTC |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                             |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                             |         |         |                     |                     |
	|                | image-693828                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-693828                | jenkins | v1.31.2 | 30 Aug 23 23:03 UTC | 30 Aug 23 23:03 UTC |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | --build-opt=no-cache -p                  |                             |         |         |                     |                     |
	|                | image-693828                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-693828                | jenkins | v1.31.2 | 30 Aug 23 23:03 UTC | 30 Aug 23 23:03 UTC |
	|                | -f inner/Dockerfile                      |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-f            |                             |         |         |                     |                     |
	|                | -p image-693828                          |                             |         |         |                     |                     |
	| delete         | -p image-693828                          | image-693828                | jenkins | v1.31.2 | 30 Aug 23 23:03 UTC | 30 Aug 23 23:04 UTC |
	| start          | -p ingress-addon-legacy-211142           | ingress-addon-legacy-211142 | jenkins | v1.31.2 | 30 Aug 23 23:04 UTC | 30 Aug 23 23:05 UTC |
	|                | --kubernetes-version=v1.18.20            |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                     |                             |         |         |                     |                     |
	|                | --container-runtime=docker               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-211142              | ingress-addon-legacy-211142 | jenkins | v1.31.2 | 30 Aug 23 23:05 UTC | 30 Aug 23 23:06 UTC |
	|                | addons enable ingress                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-211142              | ingress-addon-legacy-211142 | jenkins | v1.31.2 | 30 Aug 23 23:06 UTC | 30 Aug 23 23:06 UTC |
	|                | addons enable ingress-dns                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-211142              | ingress-addon-legacy-211142 | jenkins | v1.31.2 | 30 Aug 23 23:06 UTC | 30 Aug 23 23:06 UTC |
	|                | ssh curl -s http://127.0.0.1/            |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'             |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-211142 ip           | ingress-addon-legacy-211142 | jenkins | v1.31.2 | 30 Aug 23 23:06 UTC | 30 Aug 23 23:06 UTC |
	| addons         | ingress-addon-legacy-211142              | ingress-addon-legacy-211142 | jenkins | v1.31.2 | 30 Aug 23 23:06 UTC | 30 Aug 23 23:06 UTC |
	|                | addons disable ingress-dns               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-211142              | ingress-addon-legacy-211142 | jenkins | v1.31.2 | 30 Aug 23 23:06 UTC | 30 Aug 23 23:06 UTC |
	|                | addons disable ingress                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/30 23:04:01
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0830 23:04:01.675868 1547838 out.go:296] Setting OutFile to fd 1 ...
	I0830 23:04:01.676040 1547838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 23:04:01.676051 1547838 out.go:309] Setting ErrFile to fd 2...
	I0830 23:04:01.676056 1547838 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 23:04:01.676335 1547838 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-1496922/.minikube/bin
	I0830 23:04:01.676739 1547838 out.go:303] Setting JSON to false
	I0830 23:04:01.677744 1547838 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":27978,"bootTime":1693408664,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1043-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0830 23:04:01.677808 1547838 start.go:138] virtualization:  
	I0830 23:04:01.680692 1547838 out.go:177] * [ingress-addon-legacy-211142] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0830 23:04:01.683633 1547838 out.go:177]   - MINIKUBE_LOCATION=17114
	I0830 23:04:01.685775 1547838 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 23:04:01.683779 1547838 notify.go:220] Checking for updates...
	I0830 23:04:01.689920 1547838 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17114-1496922/kubeconfig
	I0830 23:04:01.692239 1547838 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-1496922/.minikube
	I0830 23:04:01.694114 1547838 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0830 23:04:01.696155 1547838 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 23:04:01.698468 1547838 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 23:04:01.722253 1547838 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0830 23:04:01.722348 1547838 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 23:04:01.816273 1547838 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:35 SystemTime:2023-08-30 23:04:01.806199185 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 23:04:01.816382 1547838 docker.go:294] overlay module found
	I0830 23:04:01.818675 1547838 out.go:177] * Using the docker driver based on user configuration
	I0830 23:04:01.820615 1547838 start.go:298] selected driver: docker
	I0830 23:04:01.820643 1547838 start.go:902] validating driver "docker" against <nil>
	I0830 23:04:01.820656 1547838 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 23:04:01.821300 1547838 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 23:04:01.886772 1547838 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:35 SystemTime:2023-08-30 23:04:01.87688735 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 23:04:01.886925 1547838 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0830 23:04:01.887144 1547838 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0830 23:04:01.889626 1547838 out.go:177] * Using Docker driver with root privileges
	I0830 23:04:01.891934 1547838 cni.go:84] Creating CNI manager for ""
	I0830 23:04:01.891958 1547838 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0830 23:04:01.891972 1547838 start_flags.go:319] config:
	{Name:ingress-addon-legacy-211142 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-211142 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 23:04:01.894518 1547838 out.go:177] * Starting control plane node ingress-addon-legacy-211142 in cluster ingress-addon-legacy-211142
	I0830 23:04:01.896367 1547838 cache.go:122] Beginning downloading kic base image for docker with docker
	I0830 23:04:01.898192 1547838 out.go:177] * Pulling base image ...
	I0830 23:04:01.900076 1547838 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0830 23:04:01.900169 1547838 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec in local docker daemon
	I0830 23:04:01.916782 1547838 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec in local docker daemon, skipping pull
	I0830 23:04:01.916801 1547838 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec exists in daemon, skipping load
	I0830 23:04:01.973434 1547838 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0830 23:04:01.973459 1547838 cache.go:57] Caching tarball of preloaded images
	I0830 23:04:01.973624 1547838 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0830 23:04:01.975720 1547838 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0830 23:04:01.977618 1547838 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0830 23:04:02.093805 1547838 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /home/jenkins/minikube-integration/17114-1496922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0830 23:04:10.095324 1547838 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0830 23:04:10.095453 1547838 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17114-1496922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0830 23:04:11.132723 1547838 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0830 23:04:11.133090 1547838 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/config.json ...
	I0830 23:04:11.133143 1547838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/config.json: {Name:mk5b75fc6d8352c24ce19e02bc6f3cb90accccd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 23:04:11.133345 1547838 cache.go:195] Successfully downloaded all kic artifacts
	I0830 23:04:11.133399 1547838 start.go:365] acquiring machines lock for ingress-addon-legacy-211142: {Name:mk84ebc2ad22c477a8136bf59e418c0d296ccce8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0830 23:04:11.133459 1547838 start.go:369] acquired machines lock for "ingress-addon-legacy-211142" in 48.837µs
	I0830 23:04:11.133480 1547838 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-211142 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-211142 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0830 23:04:11.133548 1547838 start.go:125] createHost starting for "" (driver="docker")
	I0830 23:04:11.135749 1547838 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0830 23:04:11.135957 1547838 start.go:159] libmachine.API.Create for "ingress-addon-legacy-211142" (driver="docker")
	I0830 23:04:11.135980 1547838 client.go:168] LocalClient.Create starting
	I0830 23:04:11.136033 1547838 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17114-1496922/.minikube/certs/ca.pem
	I0830 23:04:11.136069 1547838 main.go:141] libmachine: Decoding PEM data...
	I0830 23:04:11.136084 1547838 main.go:141] libmachine: Parsing certificate...
	I0830 23:04:11.136148 1547838 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17114-1496922/.minikube/certs/cert.pem
	I0830 23:04:11.136169 1547838 main.go:141] libmachine: Decoding PEM data...
	I0830 23:04:11.136181 1547838 main.go:141] libmachine: Parsing certificate...
	I0830 23:04:11.136531 1547838 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-211142 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0830 23:04:11.155130 1547838 cli_runner.go:211] docker network inspect ingress-addon-legacy-211142 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0830 23:04:11.155217 1547838 network_create.go:281] running [docker network inspect ingress-addon-legacy-211142] to gather additional debugging logs...
	I0830 23:04:11.155238 1547838 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-211142
	W0830 23:04:11.172571 1547838 cli_runner.go:211] docker network inspect ingress-addon-legacy-211142 returned with exit code 1
	I0830 23:04:11.172608 1547838 network_create.go:284] error running [docker network inspect ingress-addon-legacy-211142]: docker network inspect ingress-addon-legacy-211142: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-211142 not found
	I0830 23:04:11.172624 1547838 network_create.go:286] output of [docker network inspect ingress-addon-legacy-211142]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-211142 not found
	
	** /stderr **
	I0830 23:04:11.172700 1547838 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0830 23:04:11.190490 1547838 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40001c4810}
	I0830 23:04:11.190532 1547838 network_create.go:123] attempt to create docker network ingress-addon-legacy-211142 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0830 23:04:11.190587 1547838 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-211142 ingress-addon-legacy-211142
	I0830 23:04:11.263820 1547838 network_create.go:107] docker network ingress-addon-legacy-211142 192.168.49.0/24 created
	I0830 23:04:11.263868 1547838 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-211142" container
	I0830 23:04:11.263942 1547838 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0830 23:04:11.279955 1547838 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-211142 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-211142 --label created_by.minikube.sigs.k8s.io=true
	I0830 23:04:11.297834 1547838 oci.go:103] Successfully created a docker volume ingress-addon-legacy-211142
	I0830 23:04:11.297920 1547838 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-211142-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-211142 --entrypoint /usr/bin/test -v ingress-addon-legacy-211142:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec -d /var/lib
	I0830 23:04:12.642040 1547838 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-211142-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-211142 --entrypoint /usr/bin/test -v ingress-addon-legacy-211142:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec -d /var/lib: (1.344065775s)
	I0830 23:04:12.642075 1547838 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-211142
	I0830 23:04:12.642096 1547838 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0830 23:04:12.642120 1547838 kic.go:190] Starting extracting preloaded images to volume ...
	I0830 23:04:12.642198 1547838 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17114-1496922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-211142:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec -I lz4 -xf /preloaded.tar -C /extractDir
	I0830 23:04:17.145483 1547838 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17114-1496922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-211142:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec -I lz4 -xf /preloaded.tar -C /extractDir: (4.503241043s)
	I0830 23:04:17.145515 1547838 kic.go:199] duration metric: took 4.503397 seconds to extract preloaded images to volume
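The extraction above is the kic preload trick: a throwaway container mounts the named volume and untars the preloaded image store into it, so the node container later starts with /var/lib/docker already populated. A sketch of the same pattern (the tarball path here is illustrative):

	docker run --rm --entrypoint /usr/bin/tar \
	  -v /path/to/preloaded-images.tar.lz4:/preloaded.tar:ro \
	  -v ingress-addon-legacy-211142:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120 \
	  -I lz4 -xf /preloaded.tar -C /extractDir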
	W0830 23:04:17.145654 1547838 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0830 23:04:17.145760 1547838 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0830 23:04:17.211316 1547838 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-211142 --name ingress-addon-legacy-211142 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-211142 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-211142 --network ingress-addon-legacy-211142 --ip 192.168.49.2 --volume ingress-addon-legacy-211142:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec
	I0830 23:04:17.550436 1547838 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-211142 --format={{.State.Running}}
	I0830 23:04:17.575229 1547838 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-211142 --format={{.State.Status}}
	I0830 23:04:17.600370 1547838 cli_runner.go:164] Run: docker exec ingress-addon-legacy-211142 stat /var/lib/dpkg/alternatives/iptables
	I0830 23:04:17.667571 1547838 oci.go:144] the created container "ingress-addon-legacy-211142" has a running status.
	I0830 23:04:17.667601 1547838 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17114-1496922/.minikube/machines/ingress-addon-legacy-211142/id_rsa...
	I0830 23:04:17.898759 1547838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-1496922/.minikube/machines/ingress-addon-legacy-211142/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0830 23:04:17.898801 1547838 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17114-1496922/.minikube/machines/ingress-addon-legacy-211142/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0830 23:04:17.928580 1547838 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-211142 --format={{.State.Status}}
	I0830 23:04:17.951740 1547838 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0830 23:04:17.951765 1547838 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-211142 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0830 23:04:18.057558 1547838 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-211142 --format={{.State.Status}}
	I0830 23:04:18.088343 1547838 machine.go:88] provisioning docker machine ...
	I0830 23:04:18.088376 1547838 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-211142"
	I0830 23:04:18.088441 1547838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211142
	I0830 23:04:18.125090 1547838 main.go:141] libmachine: Using SSH client type: native
	I0830 23:04:18.125868 1547838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34357 <nil> <nil>}
	I0830 23:04:18.125889 1547838 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-211142 && echo "ingress-addon-legacy-211142" | sudo tee /etc/hostname
	I0830 23:04:18.126490 1547838 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0830 23:04:21.283447 1547838 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-211142
	
	I0830 23:04:21.283527 1547838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211142
	I0830 23:04:21.301575 1547838 main.go:141] libmachine: Using SSH client type: native
	I0830 23:04:21.302047 1547838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34357 <nil> <nil>}
	I0830 23:04:21.302071 1547838 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-211142' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-211142/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-211142' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0830 23:04:21.446132 1547838 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0830 23:04:21.446159 1547838 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17114-1496922/.minikube CaCertPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17114-1496922/.minikube}
	I0830 23:04:21.446187 1547838 ubuntu.go:177] setting up certificates
	I0830 23:04:21.446195 1547838 provision.go:83] configureAuth start
	I0830 23:04:21.446287 1547838 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-211142
	I0830 23:04:21.464083 1547838 provision.go:138] copyHostCerts
	I0830 23:04:21.464121 1547838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-1496922/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17114-1496922/.minikube/cert.pem
	I0830 23:04:21.464152 1547838 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-1496922/.minikube/cert.pem, removing ...
	I0830 23:04:21.464159 1547838 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-1496922/.minikube/cert.pem
	I0830 23:04:21.464234 1547838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-1496922/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17114-1496922/.minikube/cert.pem (1123 bytes)
	I0830 23:04:21.464308 1547838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-1496922/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17114-1496922/.minikube/key.pem
	I0830 23:04:21.464324 1547838 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-1496922/.minikube/key.pem, removing ...
	I0830 23:04:21.464328 1547838 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-1496922/.minikube/key.pem
	I0830 23:04:21.464353 1547838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-1496922/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17114-1496922/.minikube/key.pem (1679 bytes)
	I0830 23:04:21.464389 1547838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-1496922/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17114-1496922/.minikube/ca.pem
	I0830 23:04:21.464403 1547838 exec_runner.go:144] found /home/jenkins/minikube-integration/17114-1496922/.minikube/ca.pem, removing ...
	I0830 23:04:21.464412 1547838 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17114-1496922/.minikube/ca.pem
	I0830 23:04:21.464436 1547838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17114-1496922/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17114-1496922/.minikube/ca.pem (1082 bytes)
	I0830 23:04:21.464477 1547838 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17114-1496922/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17114-1496922/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17114-1496922/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-211142 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-211142]
	I0830 23:04:21.991737 1547838 provision.go:172] copyRemoteCerts
	I0830 23:04:21.991802 1547838 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0830 23:04:21.991852 1547838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211142
	I0830 23:04:22.008823 1547838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34357 SSHKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/ingress-addon-legacy-211142/id_rsa Username:docker}
	I0830 23:04:22.111024 1547838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-1496922/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0830 23:04:22.111086 1547838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-1496922/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0830 23:04:22.137187 1547838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-1496922/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0830 23:04:22.137266 1547838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-1496922/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0830 23:04:22.163557 1547838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-1496922/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0830 23:04:22.163616 1547838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-1496922/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0830 23:04:22.190278 1547838 provision.go:86] duration metric: configureAuth took 744.041884ms
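With server.pem, server-key.pem and ca.pem now in /etc/docker, the dockerd configured below (--tlsverify) will only accept clients presenting a certificate signed by that CA. A quick check against the published 2376 port, assuming the host-side minikube cert store (paths illustrative):

	PORT=$(docker port ingress-addon-legacy-211142 2376 | cut -d: -f2)
	docker --tlsverify --tlscacert ~/.minikube/certs/ca.pem \
	  --tlscert ~/.minikube/certs/cert.pem --tlskey ~/.minikube/certs/key.pem \
	  -H tcp://127.0.0.1:$PORT version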
	I0830 23:04:22.190302 1547838 ubuntu.go:193] setting minikube options for container-runtime
	I0830 23:04:22.190485 1547838 config.go:182] Loaded profile config "ingress-addon-legacy-211142": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0830 23:04:22.190534 1547838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211142
	I0830 23:04:22.208264 1547838 main.go:141] libmachine: Using SSH client type: native
	I0830 23:04:22.208706 1547838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34357 <nil> <nil>}
	I0830 23:04:22.208721 1547838 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0830 23:04:22.354514 1547838 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0830 23:04:22.354570 1547838 ubuntu.go:71] root file system type: overlay
	I0830 23:04:22.354687 1547838 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0830 23:04:22.354761 1547838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211142
	I0830 23:04:22.372627 1547838 main.go:141] libmachine: Using SSH client type: native
	I0830 23:04:22.373059 1547838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34357 <nil> <nil>}
	I0830 23:04:22.373271 1547838 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0830 23:04:22.527268 1547838 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0830 23:04:22.527366 1547838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211142
	I0830 23:04:22.545393 1547838 main.go:141] libmachine: Using SSH client type: native
	I0830 23:04:22.545822 1547838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 34357 <nil> <nil>}
	I0830 23:04:22.545846 1547838 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0830 23:04:23.364300 1547838 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-07-21 20:33:53.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-08-30 23:04:22.522661243 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
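The swap above only fires when diff reports a change, so repeated starts are idempotent. After it, the effective command line can be read back from systemd (a quick check, not part of the minikube flow):

	systemctl show docker --property=ExecStart --no-pager
	sudo systemctl is-active docker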
	
	I0830 23:04:23.364337 1547838 machine.go:91] provisioned docker machine in 5.275967949s
	I0830 23:04:23.364348 1547838 client.go:171] LocalClient.Create took 12.228363703s
	I0830 23:04:23.364369 1547838 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-211142" took 12.228411227s
	I0830 23:04:23.364381 1547838 start.go:300] post-start starting for "ingress-addon-legacy-211142" (driver="docker")
	I0830 23:04:23.364394 1547838 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0830 23:04:23.364461 1547838 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0830 23:04:23.364505 1547838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211142
	I0830 23:04:23.386324 1547838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34357 SSHKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/ingress-addon-legacy-211142/id_rsa Username:docker}
	I0830 23:04:23.488000 1547838 ssh_runner.go:195] Run: cat /etc/os-release
	I0830 23:04:23.491946 1547838 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0830 23:04:23.491983 1547838 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0830 23:04:23.492013 1547838 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0830 23:04:23.492026 1547838 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0830 23:04:23.492036 1547838 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-1496922/.minikube/addons for local assets ...
	I0830 23:04:23.492112 1547838 filesync.go:126] Scanning /home/jenkins/minikube-integration/17114-1496922/.minikube/files for local assets ...
	I0830 23:04:23.492190 1547838 filesync.go:149] local asset: /home/jenkins/minikube-integration/17114-1496922/.minikube/files/etc/ssl/certs/15023032.pem -> 15023032.pem in /etc/ssl/certs
	I0830 23:04:23.492203 1547838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-1496922/.minikube/files/etc/ssl/certs/15023032.pem -> /etc/ssl/certs/15023032.pem
	I0830 23:04:23.492304 1547838 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0830 23:04:23.502522 1547838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-1496922/.minikube/files/etc/ssl/certs/15023032.pem --> /etc/ssl/certs/15023032.pem (1708 bytes)
	I0830 23:04:23.529296 1547838 start.go:303] post-start completed in 164.898601ms
	I0830 23:04:23.529668 1547838 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-211142
	I0830 23:04:23.549790 1547838 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/config.json ...
	I0830 23:04:23.550050 1547838 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0830 23:04:23.550106 1547838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211142
	I0830 23:04:23.567099 1547838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34357 SSHKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/ingress-addon-legacy-211142/id_rsa Username:docker}
	I0830 23:04:23.663051 1547838 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0830 23:04:23.668627 1547838 start.go:128] duration metric: createHost completed in 12.5350665s
	I0830 23:04:23.668652 1547838 start.go:83] releasing machines lock for "ingress-addon-legacy-211142", held for 12.535183128s
	I0830 23:04:23.668722 1547838 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-211142
	I0830 23:04:23.685954 1547838 ssh_runner.go:195] Run: cat /version.json
	I0830 23:04:23.685991 1547838 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0830 23:04:23.686013 1547838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211142
	I0830 23:04:23.686048 1547838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211142
	I0830 23:04:23.706963 1547838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34357 SSHKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/ingress-addon-legacy-211142/id_rsa Username:docker}
	I0830 23:04:23.717228 1547838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34357 SSHKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/ingress-addon-legacy-211142/id_rsa Username:docker}
	I0830 23:04:23.934349 1547838 ssh_runner.go:195] Run: systemctl --version
	I0830 23:04:23.939665 1547838 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0830 23:04:23.944964 1547838 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0830 23:04:23.974959 1547838 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0830 23:04:23.975038 1547838 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0830 23:04:23.995306 1547838 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0830 23:04:24.016185 1547838 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0830 23:04:24.016216 1547838 start.go:466] detecting cgroup driver to use...
	I0830 23:04:24.016251 1547838 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0830 23:04:24.016361 1547838 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 23:04:24.036298 1547838 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0830 23:04:24.047851 1547838 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0830 23:04:24.060787 1547838 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0830 23:04:24.060852 1547838 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0830 23:04:24.072553 1547838 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0830 23:04:24.084523 1547838 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0830 23:04:24.096074 1547838 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0830 23:04:24.107602 1547838 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0830 23:04:24.118464 1547838 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0830 23:04:24.130267 1547838 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0830 23:04:24.140435 1547838 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
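The two kernel settings touched above are standard Kubernetes node prerequisites; the net.bridge.* sysctls only exist once the br_netfilter module is loaded. The explicit equivalent:

	sudo modprobe br_netfilter
	sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
	sudo sysctl -w net.ipv4.ip_forward=1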
	I0830 23:04:24.150405 1547838 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 23:04:24.249315 1547838 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0830 23:04:24.349163 1547838 start.go:466] detecting cgroup driver to use...
	I0830 23:04:24.349250 1547838 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0830 23:04:24.349327 1547838 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0830 23:04:24.367239 1547838 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0830 23:04:24.367319 1547838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0830 23:04:24.382704 1547838 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0830 23:04:24.403783 1547838 ssh_runner.go:195] Run: which cri-dockerd
	I0830 23:04:24.408497 1547838 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0830 23:04:24.419637 1547838 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0830 23:04:24.447285 1547838 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0830 23:04:24.559471 1547838 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0830 23:04:24.665543 1547838 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0830 23:04:24.665574 1547838 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0830 23:04:24.687092 1547838 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 23:04:24.788218 1547838 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0830 23:04:25.062893 1547838 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0830 23:04:25.090408 1547838 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0830 23:04:25.123885 1547838 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.5 ...
	I0830 23:04:25.123994 1547838 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-211142 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0830 23:04:25.142445 1547838 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0830 23:04:25.147189 1547838 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 23:04:25.160433 1547838 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0830 23:04:25.160500 1547838 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0830 23:04:25.181346 1547838 docker.go:636] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0830 23:04:25.181374 1547838 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0830 23:04:25.181427 1547838 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0830 23:04:25.191913 1547838 ssh_runner.go:195] Run: which lz4
	I0830 23:04:25.195982 1547838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-1496922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0830 23:04:25.196072 1547838 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0830 23:04:25.200003 1547838 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0830 23:04:25.200036 1547838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-1496922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I0830 23:04:27.296215 1547838 docker.go:600] Took 2.100174 seconds to copy over tarball
	I0830 23:04:27.296287 1547838 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0830 23:04:29.664842 1547838 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.368528035s)
	I0830 23:04:29.664866 1547838 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0830 23:04:29.736917 1547838 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0830 23:04:29.747423 1547838 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0830 23:04:29.767660 1547838 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0830 23:04:29.869393 1547838 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0830 23:04:32.582933 1547838 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.713507297s)
	I0830 23:04:32.583013 1547838 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0830 23:04:32.603305 1547838 docker.go:636] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0830 23:04:32.603329 1547838 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0830 23:04:32.603336 1547838 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0830 23:04:32.604778 1547838 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0830 23:04:32.604951 1547838 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0830 23:04:32.605088 1547838 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0830 23:04:32.605238 1547838 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0830 23:04:32.605454 1547838 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0830 23:04:32.605522 1547838 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 23:04:32.605574 1547838 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0830 23:04:32.605671 1547838 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0830 23:04:32.606127 1547838 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0830 23:04:32.606561 1547838 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 23:04:32.606869 1547838 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0830 23:04:32.607169 1547838 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0830 23:04:32.607432 1547838 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0830 23:04:32.607915 1547838 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0830 23:04:32.608085 1547838 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0830 23:04:32.608762 1547838 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	W0830 23:04:33.044163 1547838 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0830 23:04:33.044336 1547838 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0830 23:04:33.055483 1547838 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W0830 23:04:33.056119 1547838 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0830 23:04:33.056300 1547838 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0830 23:04:33.079740 1547838 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0830 23:04:33.079820 1547838 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0830 23:04:33.079891 1547838 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0830 23:04:33.083248 1547838 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0830 23:04:33.083321 1547838 docker.go:316] Removing image: registry.k8s.io/pause:3.2
	I0830 23:04:33.083395 1547838 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	W0830 23:04:33.083780 1547838 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0830 23:04:33.083961 1547838 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W0830 23:04:33.090423 1547838 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0830 23:04:33.090709 1547838 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	W0830 23:04:33.101248 1547838 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0830 23:04:33.101532 1547838 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	W0830 23:04:33.101972 1547838 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0830 23:04:33.102229 1547838 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0830 23:04:33.116337 1547838 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0830 23:04:33.116425 1547838 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0830 23:04:33.116519 1547838 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0830 23:04:33.145751 1547838 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-1496922/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0830 23:04:33.183391 1547838 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0830 23:04:33.183551 1547838 docker.go:316] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0830 23:04:33.183470 1547838 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-1496922/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0830 23:04:33.183669 1547838 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0830 23:04:33.183863 1547838 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0830 23:04:33.183907 1547838 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0830 23:04:33.183948 1547838 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0830 23:04:33.186434 1547838 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0830 23:04:33.186499 1547838 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0830 23:04:33.186569 1547838 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0830 23:04:33.186832 1547838 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0830 23:04:33.186877 1547838 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.7
	I0830 23:04:33.186926 1547838 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0830 23:04:33.195898 1547838 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-1496922/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0830 23:04:33.247829 1547838 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-1496922/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0830 23:04:33.247932 1547838 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-1496922/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0830 23:04:33.248010 1547838 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-1496922/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0830 23:04:33.248066 1547838 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-1496922/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	W0830 23:04:33.319601 1547838 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0830 23:04:33.319772 1547838 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 23:04:33.339482 1547838 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0830 23:04:33.339526 1547838 docker.go:316] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 23:04:33.339572 1547838 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 23:04:33.398827 1547838 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17114-1496922/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0830 23:04:33.398901 1547838 cache_images.go:92] LoadImages completed in 795.553811ms
	W0830 23:04:33.398974 1547838 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17114-1496922/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20: no such file or directory
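The arch-mismatch warnings above mean the amd64 variants of these images landed on this arm64 host, so minikube removes them and falls back to its image cache. Whether a local image matches the host can be checked directly:

	docker image inspect --format '{{.Os}}/{{.Architecture}}' registry.k8s.io/pause:3.2
	uname -m    # arm64 hosts report aarch64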
	I0830 23:04:33.399032 1547838 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0830 23:04:33.459610 1547838 cni.go:84] Creating CNI manager for ""
	I0830 23:04:33.459632 1547838 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0830 23:04:33.459662 1547838 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0830 23:04:33.459680 1547838 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-211142 NodeName:ingress-addon-legacy-211142 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0830 23:04:33.459820 1547838 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-211142"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
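The generated config above is copied to /var/tmp/minikube/kubeadm.yaml.new on the node (see the scp below). A config like this can be exercised without mutating the node via kubeadm's dry-run mode, assuming a kubeadm binary matching kubernetesVersion:

	sudo /var/lib/minikube/binaries/v1.18.20/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run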
	
	I0830 23:04:33.459905 1547838 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-211142 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-211142 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0830 23:04:33.459973 1547838 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0830 23:04:33.470439 1547838 binaries.go:44] Found k8s binaries, skipping transfer
	I0830 23:04:33.470529 1547838 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0830 23:04:33.480703 1547838 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0830 23:04:33.501568 1547838 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0830 23:04:33.521978 1547838 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I0830 23:04:33.542593 1547838 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0830 23:04:33.546984 1547838 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0830 23:04:33.560050 1547838 certs.go:56] Setting up /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142 for IP: 192.168.49.2
	I0830 23:04:33.560078 1547838 certs.go:190] acquiring lock for shared ca certs: {Name:mkb3bc561ee04b0a6895c261d3178d0156e44f5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 23:04:33.560218 1547838 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17114-1496922/.minikube/ca.key
	I0830 23:04:33.560266 1547838 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17114-1496922/.minikube/proxy-client-ca.key
	I0830 23:04:33.560320 1547838 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.key
	I0830 23:04:33.560351 1547838 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt with IP's: []
	I0830 23:04:34.221954 1547838 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt ...
	I0830 23:04:34.221984 1547838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt: {Name:mk5765fdf24957717d16e4d8ba5c4cc4b447e1da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 23:04:34.222203 1547838 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.key ...
	I0830 23:04:34.222216 1547838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.key: {Name:mk0cfeba09d8e4fe759b81f59a853ab1dae6f8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 23:04:34.222309 1547838 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/apiserver.key.dd3b5fb2
	I0830 23:04:34.222324 1547838 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0830 23:04:34.471443 1547838 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/apiserver.crt.dd3b5fb2 ...
	I0830 23:04:34.471473 1547838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/apiserver.crt.dd3b5fb2: {Name:mk777992e92a2db384b8a584324e01f6617a03cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 23:04:34.471653 1547838 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/apiserver.key.dd3b5fb2 ...
	I0830 23:04:34.471664 1547838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/apiserver.key.dd3b5fb2: {Name:mkb74e78535e4d830aa90d6999907e489066baba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 23:04:34.471747 1547838 certs.go:337] copying /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/apiserver.crt
	I0830 23:04:34.471824 1547838 certs.go:341] copying /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/apiserver.key
	I0830 23:04:34.471877 1547838 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/proxy-client.key
	I0830 23:04:34.471893 1547838 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/proxy-client.crt with IP's: []
	I0830 23:04:34.754537 1547838 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/proxy-client.crt ...
	I0830 23:04:34.754566 1547838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/proxy-client.crt: {Name:mk8787d33f3b0d6fb2fa98de324350f8597d5ce6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 23:04:34.754748 1547838 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/proxy-client.key ...
	I0830 23:04:34.754763 1547838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/proxy-client.key: {Name:mk618fe36a9cbb3aedee37a1a4de5cd3432fbace Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 23:04:34.754844 1547838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0830 23:04:34.754863 1547838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0830 23:04:34.754877 1547838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0830 23:04:34.754893 1547838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0830 23:04:34.754914 1547838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-1496922/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0830 23:04:34.754933 1547838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-1496922/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0830 23:04:34.754947 1547838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-1496922/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0830 23:04:34.754957 1547838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-1496922/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0830 23:04:34.755008 1547838 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-1496922/.minikube/certs/home/jenkins/minikube-integration/17114-1496922/.minikube/certs/1502303.pem (1338 bytes)
	W0830 23:04:34.755049 1547838 certs.go:433] ignoring /home/jenkins/minikube-integration/17114-1496922/.minikube/certs/home/jenkins/minikube-integration/17114-1496922/.minikube/certs/1502303_empty.pem, impossibly tiny 0 bytes
	I0830 23:04:34.755062 1547838 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-1496922/.minikube/certs/home/jenkins/minikube-integration/17114-1496922/.minikube/certs/ca-key.pem (1675 bytes)
	I0830 23:04:34.755089 1547838 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-1496922/.minikube/certs/home/jenkins/minikube-integration/17114-1496922/.minikube/certs/ca.pem (1082 bytes)
	I0830 23:04:34.755119 1547838 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-1496922/.minikube/certs/home/jenkins/minikube-integration/17114-1496922/.minikube/certs/cert.pem (1123 bytes)
	I0830 23:04:34.755148 1547838 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-1496922/.minikube/certs/home/jenkins/minikube-integration/17114-1496922/.minikube/certs/key.pem (1679 bytes)
	I0830 23:04:34.755198 1547838 certs.go:437] found cert: /home/jenkins/minikube-integration/17114-1496922/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17114-1496922/.minikube/files/etc/ssl/certs/15023032.pem (1708 bytes)
	I0830 23:04:34.755232 1547838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-1496922/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0830 23:04:34.755249 1547838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-1496922/.minikube/certs/1502303.pem -> /usr/share/ca-certificates/1502303.pem
	I0830 23:04:34.755259 1547838 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17114-1496922/.minikube/files/etc/ssl/certs/15023032.pem -> /usr/share/ca-certificates/15023032.pem
	I0830 23:04:34.755895 1547838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0830 23:04:34.783523 1547838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0830 23:04:34.811424 1547838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0830 23:04:34.838388 1547838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0830 23:04:34.865100 1547838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-1496922/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0830 23:04:34.893632 1547838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-1496922/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0830 23:04:34.920571 1547838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-1496922/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0830 23:04:34.948918 1547838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-1496922/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0830 23:04:34.976256 1547838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-1496922/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0830 23:04:35.003257 1547838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-1496922/.minikube/certs/1502303.pem --> /usr/share/ca-certificates/1502303.pem (1338 bytes)
	I0830 23:04:35.030909 1547838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17114-1496922/.minikube/files/etc/ssl/certs/15023032.pem --> /usr/share/ca-certificates/15023032.pem (1708 bytes)
	I0830 23:04:35.057898 1547838 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0830 23:04:35.078601 1547838 ssh_runner.go:195] Run: openssl version
	I0830 23:04:35.085732 1547838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1502303.pem && ln -fs /usr/share/ca-certificates/1502303.pem /etc/ssl/certs/1502303.pem"
	I0830 23:04:35.097350 1547838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1502303.pem
	I0830 23:04:35.101863 1547838 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 30 22:59 /usr/share/ca-certificates/1502303.pem
	I0830 23:04:35.101926 1547838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1502303.pem
	I0830 23:04:35.110350 1547838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1502303.pem /etc/ssl/certs/51391683.0"
	I0830 23:04:35.122620 1547838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023032.pem && ln -fs /usr/share/ca-certificates/15023032.pem /etc/ssl/certs/15023032.pem"
	I0830 23:04:35.133839 1547838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023032.pem
	I0830 23:04:35.138406 1547838 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 30 22:59 /usr/share/ca-certificates/15023032.pem
	I0830 23:04:35.138471 1547838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023032.pem
	I0830 23:04:35.146921 1547838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15023032.pem /etc/ssl/certs/3ec20f2e.0"
	I0830 23:04:35.158699 1547838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0830 23:04:35.169829 1547838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0830 23:04:35.174349 1547838 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 30 22:54 /usr/share/ca-certificates/minikubeCA.pem
	I0830 23:04:35.174441 1547838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0830 23:04:35.182801 1547838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
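For reference, the three ln/openssl sequences above follow OpenSSL's subject-hash convention: a CA certificate becomes system-trusted once a symlink named <subject_hash>.0 under /etc/ssl/certs points at the PEM file. A minimal sketch of one such installation (the cert path mirrors the run above; any PEM-encoded CA would work the same way):

    #!/usr/bin/env bash
    # Install a CA under the OpenSSL subject-hash convention (sketch).
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # ".0": first cert with this hash

The ".0" suffix disambiguates distinct CAs whose subjects happen to hash to the same value (".1", ".2", and so on).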
	I0830 23:04:35.193945 1547838 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0830 23:04:35.198079 1547838 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0830 23:04:35.198123 1547838 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-211142 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-211142 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 23:04:35.198242 1547838 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0830 23:04:35.217650 1547838 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0830 23:04:35.227739 1547838 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0830 23:04:35.237878 1547838 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0830 23:04:35.237976 1547838 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0830 23:04:35.248130 1547838 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0830 23:04:35.248173 1547838 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0830 23:04:35.307789 1547838 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0830 23:04:35.308113 1547838 kubeadm.go:322] [preflight] Running pre-flight checks
	I0830 23:04:35.516364 1547838 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0830 23:04:35.516442 1547838 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1043-aws
	I0830 23:04:35.516496 1547838 kubeadm.go:322] DOCKER_VERSION: 24.0.5
	I0830 23:04:35.516533 1547838 kubeadm.go:322] OS: Linux
	I0830 23:04:35.516581 1547838 kubeadm.go:322] CGROUPS_CPU: enabled
	I0830 23:04:35.516630 1547838 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0830 23:04:35.516680 1547838 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0830 23:04:35.516732 1547838 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0830 23:04:35.516788 1547838 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0830 23:04:35.516837 1547838 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0830 23:04:35.609691 1547838 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0830 23:04:35.609797 1547838 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0830 23:04:35.609890 1547838 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0830 23:04:35.805061 1547838 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0830 23:04:35.806551 1547838 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0830 23:04:35.806887 1547838 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0830 23:04:35.918279 1547838 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0830 23:04:35.923073 1547838 out.go:204]   - Generating certificates and keys ...
	I0830 23:04:35.923242 1547838 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0830 23:04:35.923336 1547838 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0830 23:04:36.164325 1547838 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0830 23:04:36.339664 1547838 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0830 23:04:36.787995 1547838 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0830 23:04:37.138580 1547838 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0830 23:04:37.495704 1547838 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0830 23:04:37.496117 1547838 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-211142 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0830 23:04:37.728739 1547838 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0830 23:04:37.729109 1547838 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-211142 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0830 23:04:38.513091 1547838 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0830 23:04:38.879785 1547838 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0830 23:04:39.654723 1547838 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0830 23:04:39.655229 1547838 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0830 23:04:40.100631 1547838 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0830 23:04:40.525727 1547838 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0830 23:04:41.541909 1547838 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0830 23:04:42.425644 1547838 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0830 23:04:42.426488 1547838 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0830 23:04:42.428675 1547838 out.go:204]   - Booting up control plane ...
	I0830 23:04:42.428777 1547838 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0830 23:04:42.444523 1547838 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0830 23:04:42.444607 1547838 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0830 23:04:42.444699 1547838 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0830 23:04:42.444848 1547838 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0830 23:04:54.447713 1547838 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.002655 seconds
	I0830 23:04:54.447832 1547838 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0830 23:04:54.460928 1547838 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0830 23:04:54.979590 1547838 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0830 23:04:54.979733 1547838 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-211142 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0830 23:04:55.488042 1547838 kubeadm.go:322] [bootstrap-token] Using token: ea15yv.pq117l21ut52d1l3
	I0830 23:04:55.490084 1547838 out.go:204]   - Configuring RBAC rules ...
	I0830 23:04:55.490205 1547838 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0830 23:04:55.494562 1547838 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0830 23:04:55.503830 1547838 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0830 23:04:55.506525 1547838 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0830 23:04:55.509375 1547838 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0830 23:04:55.511966 1547838 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0830 23:04:55.520918 1547838 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0830 23:04:55.837599 1547838 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0830 23:04:55.918887 1547838 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0830 23:04:55.920729 1547838 kubeadm.go:322] 
	I0830 23:04:55.920796 1547838 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0830 23:04:55.920822 1547838 kubeadm.go:322] 
	I0830 23:04:55.920896 1547838 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0830 23:04:55.920903 1547838 kubeadm.go:322] 
	I0830 23:04:55.920927 1547838 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0830 23:04:55.921513 1547838 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0830 23:04:55.921567 1547838 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0830 23:04:55.921572 1547838 kubeadm.go:322] 
	I0830 23:04:55.921621 1547838 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0830 23:04:55.921699 1547838 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0830 23:04:55.921764 1547838 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0830 23:04:55.921775 1547838 kubeadm.go:322] 
	I0830 23:04:55.922729 1547838 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0830 23:04:55.922807 1547838 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0830 23:04:55.922812 1547838 kubeadm.go:322] 
	I0830 23:04:55.923755 1547838 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ea15yv.pq117l21ut52d1l3 \
	I0830 23:04:55.923862 1547838 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:631643f16b21814ec8cd841eb99cc8a19ba92b2dc9b8745ca1e490484be9b150 \
	I0830 23:04:55.924169 1547838 kubeadm.go:322]     --control-plane 
	I0830 23:04:55.924179 1547838 kubeadm.go:322] 
	I0830 23:04:55.924523 1547838 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0830 23:04:55.924533 1547838 kubeadm.go:322] 
	I0830 23:04:55.924855 1547838 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ea15yv.pq117l21ut52d1l3 \
	I0830 23:04:55.925223 1547838 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:631643f16b21814ec8cd841eb99cc8a19ba92b2dc9b8745ca1e490484be9b150 
	I0830 23:04:55.933661 1547838 kubeadm.go:322] W0830 23:04:35.307047    1658 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0830 23:04:55.933839 1547838 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0830 23:04:55.933964 1547838 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.5. Latest validated version: 19.03
	I0830 23:04:55.934164 1547838 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1043-aws\n", err: exit status 1
	I0830 23:04:55.934262 1547838 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0830 23:04:55.934378 1547838 kubeadm.go:322] W0830 23:04:42.434765    1658 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0830 23:04:55.934494 1547838 kubeadm.go:322] W0830 23:04:42.437264    1658 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
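The sha256:... value in the join commands above is the hash of the cluster CA's public key. Should it ever need to be recomputed (for example, to hand-craft a join from another node), the standard recipe documented for kubeadm join is the following, pointed here at the certificate directory this run uses:

    # Recompute the --discovery-token-ca-cert-hash value from the cluster CA
    # (sketch; /var/lib/minikube/certs is the certificatesDir logged above).
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'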
	I0830 23:04:55.934507 1547838 cni.go:84] Creating CNI manager for ""
	I0830 23:04:55.934520 1547838 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0830 23:04:55.934535 1547838 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0830 23:04:55.934654 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:04:55.934731 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5 minikube.k8s.io/name=ingress-addon-legacy-211142 minikube.k8s.io/updated_at=2023_08_30T23_04_55_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
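The two kubectl invocations above finish the bootstrap bookkeeping: the first grants cluster-admin to the default service account in kube-system, the second stamps the node with minikube's version and commit labels. Both results can be checked afterwards with ordinary kubectl (illustrative):

    # Confirm the RBAC binding and the node labels created above.
    kubectl get clusterrolebinding minikube-rbac -o wide
    kubectl get node ingress-addon-legacy-211142 --show-labels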
	I0830 23:04:56.416206 1547838 ops.go:34] apiserver oom_adj: -16
	I0830 23:04:56.416284 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:04:56.509913 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:04:57.114592 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:04:57.614339 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:04:58.114488 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:04:58.614351 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:04:59.114783 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:04:59.614374 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:05:00.114283 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:05:00.614614 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:05:01.115039 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:05:01.614926 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:05:02.114761 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:05:02.614878 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:05:03.114674 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:05:03.614347 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:05:04.115287 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:05:04.615261 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:05:05.114391 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:05:05.614303 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:05:06.115079 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:05:06.614339 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:05:07.114766 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:05:07.614863 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:05:08.114430 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:05:08.615010 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:05:09.114888 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:05:09.615145 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:05:10.114929 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:05:10.615335 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:05:11.115161 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:05:11.615020 1547838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0830 23:05:11.862279 1547838 kubeadm.go:1081] duration metric: took 15.927673355s to wait for elevateKubeSystemPrivileges.
	I0830 23:05:11.862304 1547838 kubeadm.go:406] StartCluster complete in 36.664183879s
	I0830 23:05:11.862327 1547838 settings.go:142] acquiring lock: {Name:mk4f2036520f4cce49c9f101737e8fce8f8975fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 23:05:11.862381 1547838 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17114-1496922/kubeconfig
	I0830 23:05:11.863141 1547838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-1496922/kubeconfig: {Name:mkf4ec4235f416d6c5c702dfdbfaa4d81e4df4f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 23:05:11.863805 1547838 kapi.go:59] client config for ingress-addon-legacy-211142: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt", KeyFile:"/home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.key", CAFile:"/home/jenkins/minikube-integration/17114-1496922/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1723960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 23:05:11.865619 1547838 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0830 23:05:11.865682 1547838 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-211142"
	I0830 23:05:11.865695 1547838 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-211142"
	I0830 23:05:11.865750 1547838 host.go:66] Checking if "ingress-addon-legacy-211142" exists ...
	I0830 23:05:11.866221 1547838 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-211142 --format={{.State.Status}}
	I0830 23:05:11.866377 1547838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0830 23:05:11.866567 1547838 cert_rotation.go:137] Starting client certificate rotation controller
	I0830 23:05:11.866639 1547838 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-211142"
	I0830 23:05:11.866667 1547838 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-211142"
	I0830 23:05:11.866948 1547838 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-211142 --format={{.State.Status}}
	I0830 23:05:11.867267 1547838 config.go:182] Loaded profile config "ingress-addon-legacy-211142": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0830 23:05:11.905936 1547838 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0830 23:05:11.907681 1547838 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 23:05:11.907700 1547838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0830 23:05:11.907798 1547838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211142
	I0830 23:05:11.925443 1547838 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-211142" context rescaled to 1 replicas
	I0830 23:05:11.925479 1547838 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0830 23:05:11.927740 1547838 out.go:177] * Verifying Kubernetes components...
	I0830 23:05:11.926202 1547838 kapi.go:59] client config for ingress-addon-legacy-211142: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt", KeyFile:"/home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.key", CAFile:"/home/jenkins/minikube-integration/17114-1496922/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1723960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 23:05:11.930628 1547838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 23:05:11.950649 1547838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34357 SSHKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/ingress-addon-legacy-211142/id_rsa Username:docker}
	I0830 23:05:11.963699 1547838 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-211142"
	I0830 23:05:11.963746 1547838 host.go:66] Checking if "ingress-addon-legacy-211142" exists ...
	I0830 23:05:11.964203 1547838 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-211142 --format={{.State.Status}}
	I0830 23:05:11.991797 1547838 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0830 23:05:11.991820 1547838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0830 23:05:11.991893 1547838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-211142
	I0830 23:05:12.020642 1547838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34357 SSHKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/ingress-addon-legacy-211142/id_rsa Username:docker}
	I0830 23:05:12.238447 1547838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
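The sed pipeline above rewrites the CoreDNS ConfigMap in place, inserting a hosts block ahead of the existing forward plugin so that host.minikube.internal resolves to the host gateway from inside the cluster (a second expression inserts a log directive just before the errors line). Reconstructed from the sed expressions rather than captured from the run, the resulting Corefile fragment looks roughly like:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf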
	I0830 23:05:12.239136 1547838 kapi.go:59] client config for ingress-addon-legacy-211142: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt", KeyFile:"/home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.key", CAFile:"/home/jenkins/minikube-integration/17114-1496922/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1723960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0830 23:05:12.239455 1547838 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-211142" to be "Ready" ...
	I0830 23:05:12.243227 1547838 node_ready.go:49] node "ingress-addon-legacy-211142" has status "Ready":"True"
	I0830 23:05:12.243247 1547838 node_ready.go:38] duration metric: took 3.766321ms waiting for node "ingress-addon-legacy-211142" to be "Ready" ...
	I0830 23:05:12.243256 1547838 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 23:05:12.251537 1547838 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-f29vx" in "kube-system" namespace to be "Ready" ...
	I0830 23:05:12.278902 1547838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0830 23:05:12.384552 1547838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0830 23:05:13.121192 1547838 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0830 23:05:13.247588 1547838 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0830 23:05:13.249361 1547838 addons.go:502] enable addons completed in 1.383731355s: enabled=[storage-provisioner default-storageclass]
	I0830 23:05:14.286117 1547838 pod_ready.go:102] pod "coredns-66bff467f8-f29vx" in "kube-system" namespace has status "Ready":"False"
	I0830 23:05:16.286344 1547838 pod_ready.go:102] pod "coredns-66bff467f8-f29vx" in "kube-system" namespace has status "Ready":"False"
	I0830 23:05:18.785754 1547838 pod_ready.go:102] pod "coredns-66bff467f8-f29vx" in "kube-system" namespace has status "Ready":"False"
	I0830 23:05:19.782590 1547838 pod_ready.go:97] error getting pod "coredns-66bff467f8-f29vx" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-f29vx" not found
	I0830 23:05:19.782616 1547838 pod_ready.go:81] duration metric: took 7.53104874s waiting for pod "coredns-66bff467f8-f29vx" in "kube-system" namespace to be "Ready" ...
	E0830 23:05:19.782627 1547838 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-66bff467f8-f29vx" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-f29vx" not found
	I0830 23:05:19.782635 1547838 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-x8fxm" in "kube-system" namespace to be "Ready" ...
	I0830 23:05:21.797507 1547838 pod_ready.go:102] pod "coredns-66bff467f8-x8fxm" in "kube-system" namespace has status "Ready":"False"
	I0830 23:05:23.798024 1547838 pod_ready.go:102] pod "coredns-66bff467f8-x8fxm" in "kube-system" namespace has status "Ready":"False"
	I0830 23:05:25.800707 1547838 pod_ready.go:102] pod "coredns-66bff467f8-x8fxm" in "kube-system" namespace has status "Ready":"False"
	I0830 23:05:28.297389 1547838 pod_ready.go:102] pod "coredns-66bff467f8-x8fxm" in "kube-system" namespace has status "Ready":"False"
	I0830 23:05:30.298017 1547838 pod_ready.go:102] pod "coredns-66bff467f8-x8fxm" in "kube-system" namespace has status "Ready":"False"
	I0830 23:05:32.797629 1547838 pod_ready.go:102] pod "coredns-66bff467f8-x8fxm" in "kube-system" namespace has status "Ready":"False"
	I0830 23:05:34.798003 1547838 pod_ready.go:102] pod "coredns-66bff467f8-x8fxm" in "kube-system" namespace has status "Ready":"False"
	I0830 23:05:36.798322 1547838 pod_ready.go:102] pod "coredns-66bff467f8-x8fxm" in "kube-system" namespace has status "Ready":"False"
	I0830 23:05:39.298198 1547838 pod_ready.go:102] pod "coredns-66bff467f8-x8fxm" in "kube-system" namespace has status "Ready":"False"
	I0830 23:05:41.798176 1547838 pod_ready.go:102] pod "coredns-66bff467f8-x8fxm" in "kube-system" namespace has status "Ready":"False"
	I0830 23:05:44.296923 1547838 pod_ready.go:102] pod "coredns-66bff467f8-x8fxm" in "kube-system" namespace has status "Ready":"False"
	I0830 23:05:46.297915 1547838 pod_ready.go:102] pod "coredns-66bff467f8-x8fxm" in "kube-system" namespace has status "Ready":"False"
	I0830 23:05:48.298368 1547838 pod_ready.go:102] pod "coredns-66bff467f8-x8fxm" in "kube-system" namespace has status "Ready":"False"
	I0830 23:05:49.798405 1547838 pod_ready.go:92] pod "coredns-66bff467f8-x8fxm" in "kube-system" namespace has status "Ready":"True"
	I0830 23:05:49.798429 1547838 pod_ready.go:81] duration metric: took 30.015787359s waiting for pod "coredns-66bff467f8-x8fxm" in "kube-system" namespace to be "Ready" ...
	I0830 23:05:49.798440 1547838 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-211142" in "kube-system" namespace to be "Ready" ...
	I0830 23:05:49.803017 1547838 pod_ready.go:92] pod "etcd-ingress-addon-legacy-211142" in "kube-system" namespace has status "Ready":"True"
	I0830 23:05:49.803044 1547838 pod_ready.go:81] duration metric: took 4.596311ms waiting for pod "etcd-ingress-addon-legacy-211142" in "kube-system" namespace to be "Ready" ...
	I0830 23:05:49.803056 1547838 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-211142" in "kube-system" namespace to be "Ready" ...
	I0830 23:05:49.808024 1547838 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-211142" in "kube-system" namespace has status "Ready":"True"
	I0830 23:05:49.808050 1547838 pod_ready.go:81] duration metric: took 4.986112ms waiting for pod "kube-apiserver-ingress-addon-legacy-211142" in "kube-system" namespace to be "Ready" ...
	I0830 23:05:49.808061 1547838 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-211142" in "kube-system" namespace to be "Ready" ...
	I0830 23:05:49.812347 1547838 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-211142" in "kube-system" namespace has status "Ready":"True"
	I0830 23:05:49.812372 1547838 pod_ready.go:81] duration metric: took 4.303609ms waiting for pod "kube-controller-manager-ingress-addon-legacy-211142" in "kube-system" namespace to be "Ready" ...
	I0830 23:05:49.812387 1547838 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-plq5r" in "kube-system" namespace to be "Ready" ...
	I0830 23:05:49.816843 1547838 pod_ready.go:92] pod "kube-proxy-plq5r" in "kube-system" namespace has status "Ready":"True"
	I0830 23:05:49.816866 1547838 pod_ready.go:81] duration metric: took 4.471117ms waiting for pod "kube-proxy-plq5r" in "kube-system" namespace to be "Ready" ...
	I0830 23:05:49.816880 1547838 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-211142" in "kube-system" namespace to be "Ready" ...
	I0830 23:05:49.993039 1547838 request.go:629] Waited for 176.072026ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-211142
	I0830 23:05:50.194057 1547838 request.go:629] Waited for 198.378414ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-211142
	I0830 23:05:50.196820 1547838 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-211142" in "kube-system" namespace has status "Ready":"True"
	I0830 23:05:50.196848 1547838 pod_ready.go:81] duration metric: took 379.959719ms waiting for pod "kube-scheduler-ingress-addon-legacy-211142" in "kube-system" namespace to be "Ready" ...
	I0830 23:05:50.196859 1547838 pod_ready.go:38] duration metric: took 37.953561975s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0830 23:05:50.196905 1547838 api_server.go:52] waiting for apiserver process to appear ...
	I0830 23:05:50.197015 1547838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 23:05:50.211403 1547838 api_server.go:72] duration metric: took 38.285895584s to wait for apiserver process to appear ...
	I0830 23:05:50.211426 1547838 api_server.go:88] waiting for apiserver healthz status ...
	I0830 23:05:50.211442 1547838 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0830 23:05:50.220844 1547838 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
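The healthz probe above is a plain HTTPS GET that returns the literal body "ok" when the API server is ready. The same check can be reproduced by hand; a sketch (minikube's checker presents the cluster CA, while -k below simply skips verification):

    # Manually probe the apiserver health endpoint from this run. No token is
    # needed: /healthz is readable by unauthenticated clients through the
    # default system:public-info-viewer ClusterRole.
    curl -fsSk https://192.168.49.2:8443/healthz   # prints: ok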
	I0830 23:05:50.221770 1547838 api_server.go:141] control plane version: v1.18.20
	I0830 23:05:50.221792 1547838 api_server.go:131] duration metric: took 10.359931ms to wait for apiserver health ...
	I0830 23:05:50.221804 1547838 system_pods.go:43] waiting for kube-system pods to appear ...
	I0830 23:05:50.393110 1547838 request.go:629] Waited for 171.242313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0830 23:05:50.398734 1547838 system_pods.go:59] 7 kube-system pods found
	I0830 23:05:50.398774 1547838 system_pods.go:61] "coredns-66bff467f8-x8fxm" [92a68edc-a7f2-4aaa-9a78-ed5f7a2955fb] Running
	I0830 23:05:50.398780 1547838 system_pods.go:61] "etcd-ingress-addon-legacy-211142" [c7143f54-e3e1-4428-a0be-746275689337] Running
	I0830 23:05:50.398785 1547838 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-211142" [397673fc-bd1d-48bc-9780-db8daca3eb73] Running
	I0830 23:05:50.398815 1547838 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-211142" [cdf58848-d54b-4ae3-a056-7c5d71562bf8] Running
	I0830 23:05:50.398827 1547838 system_pods.go:61] "kube-proxy-plq5r" [df7035c9-5d9f-4708-a6b3-6d7a06b36dc3] Running
	I0830 23:05:50.398834 1547838 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-211142" [b7f3b2dd-392c-47f1-aa93-47fe83b270aa] Running
	I0830 23:05:50.398839 1547838 system_pods.go:61] "storage-provisioner" [3ac5ce57-ccc4-454d-a117-811baa0ff074] Running
	I0830 23:05:50.398844 1547838 system_pods.go:74] duration metric: took 177.034922ms to wait for pod list to return data ...
	I0830 23:05:50.398855 1547838 default_sa.go:34] waiting for default service account to be created ...
	I0830 23:05:50.593117 1547838 request.go:629] Waited for 194.173865ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0830 23:05:50.595645 1547838 default_sa.go:45] found service account: "default"
	I0830 23:05:50.595669 1547838 default_sa.go:55] duration metric: took 196.807265ms for default service account to be created ...
	I0830 23:05:50.595679 1547838 system_pods.go:116] waiting for k8s-apps to be running ...
	I0830 23:05:50.793026 1547838 request.go:629] Waited for 197.267588ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0830 23:05:50.798300 1547838 system_pods.go:86] 7 kube-system pods found
	I0830 23:05:50.798335 1547838 system_pods.go:89] "coredns-66bff467f8-x8fxm" [92a68edc-a7f2-4aaa-9a78-ed5f7a2955fb] Running
	I0830 23:05:50.798343 1547838 system_pods.go:89] "etcd-ingress-addon-legacy-211142" [c7143f54-e3e1-4428-a0be-746275689337] Running
	I0830 23:05:50.798348 1547838 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-211142" [397673fc-bd1d-48bc-9780-db8daca3eb73] Running
	I0830 23:05:50.798353 1547838 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-211142" [cdf58848-d54b-4ae3-a056-7c5d71562bf8] Running
	I0830 23:05:50.798358 1547838 system_pods.go:89] "kube-proxy-plq5r" [df7035c9-5d9f-4708-a6b3-6d7a06b36dc3] Running
	I0830 23:05:50.798363 1547838 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-211142" [b7f3b2dd-392c-47f1-aa93-47fe83b270aa] Running
	I0830 23:05:50.798374 1547838 system_pods.go:89] "storage-provisioner" [3ac5ce57-ccc4-454d-a117-811baa0ff074] Running
	I0830 23:05:50.798381 1547838 system_pods.go:126] duration metric: took 202.697802ms to wait for k8s-apps to be running ...
	I0830 23:05:50.798392 1547838 system_svc.go:44] waiting for kubelet service to be running ....
	I0830 23:05:50.798449 1547838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 23:05:50.811636 1547838 system_svc.go:56] duration metric: took 13.23306ms WaitForService to wait for kubelet.
	I0830 23:05:50.811661 1547838 kubeadm.go:581] duration metric: took 38.88615864s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0830 23:05:50.811680 1547838 node_conditions.go:102] verifying NodePressure condition ...
	I0830 23:05:50.994056 1547838 request.go:629] Waited for 182.298268ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0830 23:05:50.996684 1547838 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0830 23:05:50.996719 1547838 node_conditions.go:123] node cpu capacity is 2
	I0830 23:05:50.996731 1547838 node_conditions.go:105] duration metric: took 185.046006ms to run NodePressure ...
	I0830 23:05:50.996741 1547838 start.go:228] waiting for startup goroutines ...
	I0830 23:05:50.996748 1547838 start.go:233] waiting for cluster config update ...
	I0830 23:05:50.996758 1547838 start.go:242] writing updated cluster config ...
	I0830 23:05:50.997049 1547838 ssh_runner.go:195] Run: rm -f paused
	I0830 23:05:51.055583 1547838 start.go:600] kubectl: 1.28.1, cluster: 1.18.20 (minor skew: 10)
	I0830 23:05:51.057714 1547838 out.go:177] 
	W0830 23:05:51.059569 1547838 out.go:239] ! /usr/local/bin/kubectl is version 1.28.1, which may have incompatibilities with Kubernetes 1.18.20.
	I0830 23:05:51.061436 1547838 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0830 23:05:51.063177 1547838 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-211142" cluster and "default" namespace by default
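The warning a few lines up flags a ten-minor-version skew between the installed kubectl (1.28.1) and the cluster (1.18.20), far outside kubectl's supported +/-1 minor skew. The hint minikube prints avoids this by dispatching to a version-matched kubectl that minikube downloads itself:

    # Run a kubectl matching the cluster's Kubernetes version; everything
    # after "--" is passed through to kubectl unchanged.
    minikube kubectl -- get pods -A
    # Scoped explicitly to the profile from this run:
    minikube -p ingress-addon-legacy-211142 kubectl -- get pods -A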
	
	* 
	* ==> Docker <==
	* Aug 30 23:04:32 ingress-addon-legacy-211142 dockerd[1301]: time="2023-08-30T23:04:32.580570912Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 30 23:04:32 ingress-addon-legacy-211142 dockerd[1301]: time="2023-08-30T23:04:32.580688352Z" level=info msg="API listen on [::]:2376"
	Aug 30 23:04:32 ingress-addon-legacy-211142 systemd[1]: Started Docker Application Container Engine.
	Aug 30 23:05:13 ingress-addon-legacy-211142 dockerd[1301]: time="2023-08-30T23:05:13.475865085Z" level=info msg="ignoring event" container=9894fa21ec5018bcff1235b87393a02a592cf3b6b06c283721362b47972ca4b0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 23:05:53 ingress-addon-legacy-211142 dockerd[1301]: time="2023-08-30T23:05:53.454447974Z" level=warning msg="reference for unknown type: " digest="sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" remote="docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7"
	Aug 30 23:05:55 ingress-addon-legacy-211142 dockerd[1301]: time="2023-08-30T23:05:55.002723594Z" level=info msg="ignoring event" container=d0f711132ca42f907c4bd32a0a079113d5837729d9e297a9222521009c6a97e9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 23:05:55 ingress-addon-legacy-211142 dockerd[1301]: time="2023-08-30T23:05:55.047361535Z" level=info msg="ignoring event" container=6041576c548e6530a1ca623243b858b07806e4e0aa2f996799d24f0c4eb5dcdb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 23:05:55 ingress-addon-legacy-211142 dockerd[1301]: time="2023-08-30T23:05:55.182601886Z" level=info msg="ignoring event" container=414543960a7249d060a5d86b708046699321ed4b47e1d89462303f351bbbf459 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 23:05:55 ingress-addon-legacy-211142 dockerd[1301]: time="2023-08-30T23:05:55.347658950Z" level=info msg="ignoring event" container=45eeb0849ba436008ae2843ad92dc4c071f1f7d90befa2de8eae88b5e75a31e4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 23:05:56 ingress-addon-legacy-211142 dockerd[1301]: time="2023-08-30T23:05:56.094208771Z" level=warning msg="reference for unknown type: " digest="sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324" remote="registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324"
	Aug 30 23:05:56 ingress-addon-legacy-211142 dockerd[1301]: time="2023-08-30T23:05:56.213336590Z" level=info msg="ignoring event" container=ed7630b2ff3632a39be809718039bd30af72cd92aa8874d4beea748d909a8153 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 23:06:02 ingress-addon-legacy-211142 dockerd[1301]: time="2023-08-30T23:06:02.542553838Z" level=warning msg="Published ports are discarded when using host network mode"
	Aug 30 23:06:02 ingress-addon-legacy-211142 dockerd[1301]: time="2023-08-30T23:06:02.563220687Z" level=warning msg="Published ports are discarded when using host network mode"
	Aug 30 23:06:02 ingress-addon-legacy-211142 dockerd[1301]: time="2023-08-30T23:06:02.714851281Z" level=warning msg="reference for unknown type: " digest="sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" remote="docker.io/cryptexlabs/minikube-ingress-dns@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"
	Aug 30 23:06:08 ingress-addon-legacy-211142 dockerd[1301]: time="2023-08-30T23:06:08.647115151Z" level=info msg="ignoring event" container=94c49bb3ba9d72b5487eb1d5efd24b57aa3422e3077182716c6b03b1cb4b9b0a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 23:06:09 ingress-addon-legacy-211142 dockerd[1301]: time="2023-08-30T23:06:09.471388369Z" level=info msg="ignoring event" container=8740b4221ebffcffa4066c94b5d1fefe9172b6c3b16fb5241d962d5573682076 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 23:06:25 ingress-addon-legacy-211142 dockerd[1301]: time="2023-08-30T23:06:25.119146000Z" level=info msg="ignoring event" container=c2a82965982b065043dd934e7bb968a43f7fd375984a9b835ba416eb8a6828cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 23:06:25 ingress-addon-legacy-211142 dockerd[1301]: time="2023-08-30T23:06:25.628871628Z" level=info msg="ignoring event" container=9586b1220bc4b7b1850df99e420d69251af62aab5fa1814b18436cddb16d0b89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 23:06:26 ingress-addon-legacy-211142 dockerd[1301]: time="2023-08-30T23:06:26.577489855Z" level=info msg="ignoring event" container=73f0c94eedb6ebbc4b5c9c8aedac15f2b59f28453b8ef74acb3c14eec3b187c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 23:06:39 ingress-addon-legacy-211142 dockerd[1301]: time="2023-08-30T23:06:39.511079392Z" level=info msg="ignoring event" container=6b48406e878ab7901548debf26f1013f0d3b1a5a21556ea78d7e6b090150e20d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 23:06:39 ingress-addon-legacy-211142 dockerd[1301]: time="2023-08-30T23:06:39.599118639Z" level=info msg="ignoring event" container=73cfd0a607e6ff9a430850b5b3999fa6b9d818d5d8ae972b9d7070e25bcb0893 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 23:06:52 ingress-addon-legacy-211142 dockerd[1301]: time="2023-08-30T23:06:52.334634075Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=cc085af6cf49f965f52b9c93e56eb72115baa3dd190bf90c7b030206cd17ca98
	Aug 30 23:06:52 ingress-addon-legacy-211142 dockerd[1301]: time="2023-08-30T23:06:52.350938205Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=cc085af6cf49f965f52b9c93e56eb72115baa3dd190bf90c7b030206cd17ca98
	Aug 30 23:06:52 ingress-addon-legacy-211142 dockerd[1301]: time="2023-08-30T23:06:52.430788300Z" level=info msg="ignoring event" container=cc085af6cf49f965f52b9c93e56eb72115baa3dd190bf90c7b030206cd17ca98 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 30 23:06:52 ingress-addon-legacy-211142 dockerd[1301]: time="2023-08-30T23:06:52.494141011Z" level=info msg="ignoring event" container=6b27f7e0fff73bef34da005afeb8c03c6256dafc052735ccc54b1fd0a69a0a77 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	73cfd0a607e6f       13753a81eccfd                                                                                                      19 seconds ago       Exited              hello-world-app           2                   fd2b0b682998a       hello-world-app-5f5d8b66bb-dhtw4
	9f4a6006bed88       nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                                      44 seconds ago       Running             nginx                     0                   7fe2c1dedc60a       nginx
	cc085af6cf49f       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   58 seconds ago       Exited              controller                0                   6b27f7e0fff73       ingress-nginx-controller-7fcf777cb7-wf7ss
	45eeb0849ba43       a883f7fc35610                                                                                                      About a minute ago   Exited              patch                     1                   ed7630b2ff363       ingress-nginx-admission-patch-rndgj
	6041576c548e6       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              create                    0                   414543960a724       ingress-nginx-admission-create-stmbz
	a7ba58708a03c       gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944    About a minute ago   Running             storage-provisioner       0                   2ce6384c0db42       storage-provisioner
	d74862223c80c       6e17ba78cf3eb                                                                                                      About a minute ago   Running             coredns                   0                   ea9b72cb20583       coredns-66bff467f8-x8fxm
	af970957778d8       565297bc6f7d4                                                                                                      About a minute ago   Running             kube-proxy                0                   7bc07ff55252c       kube-proxy-plq5r
	96e97f44359c0       ab707b0a0ea33                                                                                                      2 minutes ago        Running             etcd                      0                   fded24318d00a       etcd-ingress-addon-legacy-211142
	22bf58bf87052       095f37015706d                                                                                                      2 minutes ago        Running             kube-scheduler            0                   a74746e67b552       kube-scheduler-ingress-addon-legacy-211142
	19417b3121957       68a4fac29a865                                                                                                      2 minutes ago        Running             kube-controller-manager   0                   03c323ab88dcf       kube-controller-manager-ingress-addon-legacy-211142
	09c474d13545e       2694cf044d665                                                                                                      2 minutes ago        Running             kube-apiserver            0                   1a97aab9a46fb       kube-apiserver-ingress-addon-legacy-211142
	
	* 
	* ==> coredns [d74862223c80] <==
	* [INFO] 172.17.0.1:5331 - 25762 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034108s
	[INFO] 172.17.0.1:5331 - 52149 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001191063s
	[INFO] 172.17.0.1:51454 - 45677 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001985811s
	[INFO] 172.17.0.1:51454 - 21501 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001465828s
	[INFO] 172.17.0.1:5331 - 35799 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001470841s
	[INFO] 172.17.0.1:51454 - 32262 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000183877s
	[INFO] 172.17.0.1:5331 - 19738 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000066388s
	[INFO] 172.17.0.1:24370 - 55879 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000084923s
	[INFO] 172.17.0.1:25162 - 40188 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000153444s
	[INFO] 172.17.0.1:25162 - 58474 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00004928s
	[INFO] 172.17.0.1:24370 - 25133 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000118851s
	[INFO] 172.17.0.1:25162 - 13324 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000052283s
	[INFO] 172.17.0.1:24370 - 10508 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000059668s
	[INFO] 172.17.0.1:24370 - 34413 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00004407s
	[INFO] 172.17.0.1:25162 - 36918 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000057403s
	[INFO] 172.17.0.1:24370 - 19386 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00005568s
	[INFO] 172.17.0.1:24370 - 29261 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000048985s
	[INFO] 172.17.0.1:25162 - 64766 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000056459s
	[INFO] 172.17.0.1:25162 - 61712 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00007392s
	[INFO] 172.17.0.1:24370 - 691 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001330772s
	[INFO] 172.17.0.1:25162 - 53886 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000812298s
	[INFO] 172.17.0.1:24370 - 39930 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000850452s
	[INFO] 172.17.0.1:24370 - 25166 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000096787s
	[INFO] 172.17.0.1:25162 - 47559 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000971396s
	[INFO] 172.17.0.1:25162 - 57483 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000041001s
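	A note on the pattern above: the NXDOMAIN/NOERROR ladder is ordinary resolver search-path expansion, not a coredns fault. With the cluster's default ndots:5, "hello-world-app.default.svc.cluster.local" is first tried against every suffix in the querying pod's search list (ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal) before the exact name answers NOERROR. A minimal Go sketch of the same lookup, using only the standard library; the trailing dot is what skips the expansion (illustrative only):
	
	    package main
	
	    import (
	    	"fmt"
	    	"net"
	    )
	
	    func main() {
	    	// The trailing dot makes the name fully qualified, so the resolver
	    	// queries it directly instead of walking the search list first.
	    	addrs, err := net.LookupHost("hello-world-app.default.svc.cluster.local.")
	    	if err != nil {
	    		fmt.Println("lookup failed:", err)
	    		return
	    	}
	    	fmt.Println("resolved:", addrs)
	    }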
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-211142
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-211142
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcfed3f069eb419c2ffae8f904d3fba5b9405fc5
	                    minikube.k8s.io/name=ingress-addon-legacy-211142
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_30T23_04_55_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 30 Aug 2023 23:04:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-211142
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 30 Aug 2023 23:06:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 30 Aug 2023 23:06:29 +0000   Wed, 30 Aug 2023 23:04:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 30 Aug 2023 23:06:29 +0000   Wed, 30 Aug 2023 23:04:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 30 Aug 2023 23:06:29 +0000   Wed, 30 Aug 2023 23:04:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 30 Aug 2023 23:06:29 +0000   Wed, 30 Aug 2023 23:05:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-211142
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 a1a7eaa7fdd54f3ea035d077800017a9
	  System UUID:                59bb35a3-9959-4a2a-bf6c-92a2807bb643
	  Boot ID:                    b8a33901-d088-4f70-8e50-554d8f07ad5d
	  Kernel Version:             5.15.0-1043-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.5
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-dhtw4                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 coredns-66bff467f8-x8fxm                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     107s
	  kube-system                 etcd-ingress-addon-legacy-211142                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-apiserver-ingress-addon-legacy-211142             250m (12%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-211142    200m (10%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-plq5r                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-ingress-addon-legacy-211142             100m (5%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (0%)   170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  2m14s (x5 over 2m14s)  kubelet     Node ingress-addon-legacy-211142 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m14s (x5 over 2m14s)  kubelet     Node ingress-addon-legacy-211142 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m14s (x4 over 2m14s)  kubelet     Node ingress-addon-legacy-211142 status is now: NodeHasSufficientPID
	  Normal  Starting                 119s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  119s                   kubelet     Node ingress-addon-legacy-211142 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s                   kubelet     Node ingress-addon-legacy-211142 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s                   kubelet     Node ingress-addon-legacy-211142 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  119s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                109s                   kubelet     Node ingress-addon-legacy-211142 status is now: NodeReady
	  Normal  Starting                 106s                   kube-proxy  Starting kube-proxy.
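	For reference, the condition table above can also be read programmatically; a short client-go sketch, assuming a kubeconfig at the default location (the node name is taken from this report, everything else is generic client-go usage):
	
	    package main
	
	    import (
	    	"context"
	    	"fmt"
	
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )
	
	    func main() {
	    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	    	if err != nil {
	    		panic(err)
	    	}
	    	clientset, err := kubernetes.NewForConfig(config)
	    	if err != nil {
	    		panic(err)
	    	}
	    	node, err := clientset.CoreV1().Nodes().Get(context.TODO(),
	    		"ingress-addon-legacy-211142", metav1.GetOptions{})
	    	if err != nil {
	    		panic(err)
	    	}
	    	// Print the same Type/Status/Reason triple shown in the table above.
	    	for _, c := range node.Status.Conditions {
	    		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	    	}
	    }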
	
	* 
	* ==> dmesg <==
	* [  +0.001037] FS-Cache: O-key=[8] 'f0643b0000000000'
	[  +0.000701] FS-Cache: N-cookie c=00000066 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000985] FS-Cache: N-cookie d=0000000095b1c4f2{9p.inode} n=00000000a3dadce9
	[  +0.001047] FS-Cache: N-key=[8] 'f0643b0000000000'
	[  +0.003017] FS-Cache: Duplicate cookie detected
	[  +0.000678] FS-Cache: O-cookie c=00000060 [p=0000005d fl=226 nc=0 na=1]
	[  +0.000965] FS-Cache: O-cookie d=0000000095b1c4f2{9p.inode} n=000000004eb80a15
	[  +0.001056] FS-Cache: O-key=[8] 'f0643b0000000000'
	[  +0.000710] FS-Cache: N-cookie c=00000067 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000987] FS-Cache: N-cookie d=0000000095b1c4f2{9p.inode} n=0000000058819e18
	[  +0.001069] FS-Cache: N-key=[8] 'f0643b0000000000'
	[  +2.643522] FS-Cache: Duplicate cookie detected
	[  +0.000722] FS-Cache: O-cookie c=0000005e [p=0000005d fl=226 nc=0 na=1]
	[  +0.000950] FS-Cache: O-cookie d=0000000095b1c4f2{9p.inode} n=0000000025d35af9
	[  +0.001072] FS-Cache: O-key=[8] 'ef643b0000000000'
	[  +0.000702] FS-Cache: N-cookie c=00000069 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000923] FS-Cache: N-cookie d=0000000095b1c4f2{9p.inode} n=00000000b6a5d6f3
	[  +0.001033] FS-Cache: N-key=[8] 'ef643b0000000000'
	[  +0.394896] FS-Cache: Duplicate cookie detected
	[  +0.000697] FS-Cache: O-cookie c=00000063 [p=0000005d fl=226 nc=0 na=1]
	[  +0.000948] FS-Cache: O-cookie d=0000000095b1c4f2{9p.inode} n=0000000075c0be8d
	[  +0.001224] FS-Cache: O-key=[8] 'f5643b0000000000'
	[  +0.000702] FS-Cache: N-cookie c=0000006a [p=0000005d fl=2 nc=0 na=1]
	[  +0.000922] FS-Cache: N-cookie d=0000000095b1c4f2{9p.inode} n=00000000a3dadce9
	[  +0.001033] FS-Cache: N-key=[8] 'f5643b0000000000'
	
	* 
	* ==> etcd [96e97f44359c] <==
	* raft2023/08/30 23:04:47 INFO: aec36adc501070cc became follower at term 0
	raft2023/08/30 23:04:47 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/08/30 23:04:47 INFO: aec36adc501070cc became follower at term 1
	raft2023/08/30 23:04:47 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-08-30 23:04:47.904950 W | auth: simple token is not cryptographically signed
	2023-08-30 23:04:48.077522 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-08-30 23:04:48.094833 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/08/30 23:04:48 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-08-30 23:04:48.128747 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-08-30 23:04:48.129768 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-08-30 23:04:48.130227 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-08-30 23:04:48.130396 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/08/30 23:04:48 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/08/30 23:04:48 INFO: aec36adc501070cc became candidate at term 2
	raft2023/08/30 23:04:48 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/08/30 23:04:48 INFO: aec36adc501070cc became leader at term 2
	raft2023/08/30 23:04:48 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-08-30 23:04:48.741365 I | embed: ready to serve client requests
	2023-08-30 23:04:48.741630 I | embed: ready to serve client requests
	2023-08-30 23:04:48.742993 I | embed: serving client requests on 192.168.49.2:2379
	2023-08-30 23:04:48.743251 I | etcdserver: setting up the initial cluster version to 3.4
	2023-08-30 23:04:48.743364 I | embed: serving client requests on 127.0.0.1:2379
	2023-08-30 23:04:48.743512 I | etcdserver: published {Name:ingress-addon-legacy-211142 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-08-30 23:04:48.760373 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-08-30 23:04:48.760503 I | etcdserver/api: enabled capabilities for version 3.4
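	The etcd log above shows a healthy single-node bootstrap: the member votes for itself, wins the term-2 election, and serves clients on 192.168.49.2:2379. A sketch of probing that endpoint with the etcd v3 client (endpoint from the log; the TLS material minikube requires is elided, so this is illustrative rather than a drop-in check for this exact cluster):
	
	    package main
	
	    import (
	    	"context"
	    	"fmt"
	    	"time"
	
	    	clientv3 "go.etcd.io/etcd/client/v3"
	    )
	
	    func main() {
	    	// Real connections to minikube's etcd also need the client cert/key
	    	// under /var/lib/minikube/certs/etcd; omitted for brevity.
	    	cli, err := clientv3.New(clientv3.Config{
	    		Endpoints:   []string{"https://192.168.49.2:2379"},
	    		DialTimeout: 5 * time.Second,
	    	})
	    	if err != nil {
	    		panic(err)
	    	}
	    	defer cli.Close()
	
	    	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	    	defer cancel()
	    	status, err := cli.Status(ctx, "https://192.168.49.2:2379")
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Printf("leader=%x version=%s\n", status.Leader, status.Version)
	    }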
	
	* 
	* ==> kernel <==
	*  23:06:58 up  7:49,  0 users,  load average: 1.43, 2.05, 2.75
	Linux ingress-addon-legacy-211142 5.15.0-1043-aws #48~20.04.1-Ubuntu SMP Wed Aug 16 18:32:42 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kube-apiserver [09c474d13545] <==
	* I0830 23:04:52.549627       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	E0830 23:04:52.555582       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0830 23:04:52.643588       1 cache.go:39] Caches are synced for autoregister controller
	I0830 23:04:52.644516       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0830 23:04:52.644522       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0830 23:04:52.644536       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0830 23:04:52.650265       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0830 23:04:53.437473       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0830 23:04:53.437677       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0830 23:04:53.445951       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0830 23:04:53.453029       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0830 23:04:53.453061       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0830 23:04:53.911699       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0830 23:04:53.954007       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0830 23:04:54.063642       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0830 23:04:54.064701       1 controller.go:609] quota admission added evaluator for: endpoints
	I0830 23:04:54.068557       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0830 23:04:54.888461       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0830 23:04:55.822065       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0830 23:04:55.903499       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0830 23:04:59.362163       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0830 23:05:11.653471       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0830 23:05:11.738243       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0830 23:05:51.828236       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0830 23:06:11.934151       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [19417b312195] <==
	* I0830 23:05:11.698668       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"fca48b76-6936-4dd8-9c4e-cb9b86bab3de", APIVersion:"apps/v1", ResourceVersion:"217", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-plq5r
	I0830 23:05:11.726255       1 shared_informer.go:230] Caches are synced for deployment 
	I0830 23:05:11.758493       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"7a03f2e1-c079-4a7f-a575-394cf1ac6175", APIVersion:"apps/v1", ResourceVersion:"210", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
	I0830 23:05:11.765584       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"2517f1a8-f933-4cf0-aa5a-213ecce00577", APIVersion:"apps/v1", ResourceVersion:"335", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-f29vx
	I0830 23:05:11.777030       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"2517f1a8-f933-4cf0-aa5a-213ecce00577", APIVersion:"apps/v1", ResourceVersion:"335", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-x8fxm
	I0830 23:05:11.899040       1 shared_informer.go:230] Caches are synced for disruption 
	I0830 23:05:11.899065       1 disruption.go:339] Sending events to api server.
	I0830 23:05:11.925008       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"7a03f2e1-c079-4a7f-a575-394cf1ac6175", APIVersion:"apps/v1", ResourceVersion:"352", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0830 23:05:11.927988       1 shared_informer.go:230] Caches are synced for ReplicationController 
	I0830 23:05:11.977291       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I0830 23:05:11.988991       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"2517f1a8-f933-4cf0-aa5a-213ecce00577", APIVersion:"apps/v1", ResourceVersion:"353", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-f29vx
	I0830 23:05:12.012046       1 shared_informer.go:230] Caches are synced for resource quota 
	I0830 23:05:12.033957       1 shared_informer.go:230] Caches are synced for HPA 
	I0830 23:05:12.078495       1 shared_informer.go:230] Caches are synced for resource quota 
	I0830 23:05:12.100175       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0830 23:05:12.182052       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0830 23:05:12.182074       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0830 23:05:51.808310       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"c0970aba-266c-4997-863a-02c9bf2ff4fd", APIVersion:"apps/v1", ResourceVersion:"470", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0830 23:05:51.819414       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"900de310-5823-48a4-af2c-f0e0209607b6", APIVersion:"apps/v1", ResourceVersion:"471", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-wf7ss
	I0830 23:05:51.859548       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"dd6f1759-c491-4319-95f4-373c79abd498", APIVersion:"batch/v1", ResourceVersion:"475", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-stmbz
	I0830 23:05:51.946772       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"cfb8424e-9325-44eb-8fe6-26c3c54e495c", APIVersion:"batch/v1", ResourceVersion:"490", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-rndgj
	I0830 23:05:55.145539       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"dd6f1759-c491-4319-95f4-373c79abd498", APIVersion:"batch/v1", ResourceVersion:"486", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0830 23:05:56.172365       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"cfb8424e-9325-44eb-8fe6-26c3c54e495c", APIVersion:"batch/v1", ResourceVersion:"496", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0830 23:06:22.646394       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"5f6c3ce7-86c5-4da2-a878-eb39cefac6e2", APIVersion:"apps/v1", ResourceVersion:"605", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0830 23:06:22.667508       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"5e4008bf-34d7-41de-8b86-fb6e886fa620", APIVersion:"apps/v1", ResourceVersion:"606", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-dhtw4
	
	* 
	* ==> kube-proxy [af970957778d] <==
	* W0830 23:05:12.961997       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0830 23:05:12.992019       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0830 23:05:12.992065       1 server_others.go:186] Using iptables Proxier.
	I0830 23:05:12.992372       1 server.go:583] Version: v1.18.20
	I0830 23:05:12.993441       1 config.go:315] Starting service config controller
	I0830 23:05:12.993528       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0830 23:05:12.993664       1 config.go:133] Starting endpoints config controller
	I0830 23:05:12.993692       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0830 23:05:13.094207       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0830 23:05:13.094301       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [22bf58bf8705] <==
	* W0830 23:04:52.608800       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0830 23:04:52.646871       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0830 23:04:52.646898       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0830 23:04:52.648889       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0830 23:04:52.648986       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0830 23:04:52.648995       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0830 23:04:52.649015       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0830 23:04:52.660090       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0830 23:04:52.660400       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0830 23:04:52.660600       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0830 23:04:52.660779       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0830 23:04:52.660958       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0830 23:04:52.661138       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0830 23:04:52.661300       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0830 23:04:52.661473       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0830 23:04:52.661633       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0830 23:04:52.661794       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0830 23:04:52.661950       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0830 23:04:52.662114       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0830 23:04:53.497394       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0830 23:04:53.603561       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0830 23:04:53.625867       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0830 23:04:53.632758       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0830 23:04:53.633719       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0830 23:04:56.749160       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* Aug 30 23:06:27 ingress-addon-legacy-211142 kubelet[2865]: I0830 23:06:27.588518    2865 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 73f0c94eedb6ebbc4b5c9c8aedac15f2b59f28453b8ef74acb3c14eec3b187c3
	Aug 30 23:06:27 ingress-addon-legacy-211142 kubelet[2865]: E0830 23:06:27.588770    2865 pod_workers.go:191] Error syncing pod 26e37eae-7032-4c9a-a258-e65b0ed0b18a ("kube-ingress-dns-minikube_kube-system(26e37eae-7032-4c9a-a258-e65b0ed0b18a)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(26e37eae-7032-4c9a-a258-e65b0ed0b18a)"
	Aug 30 23:06:38 ingress-addon-legacy-211142 kubelet[2865]: I0830 23:06:38.530380    2865 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-9vrjl" (UniqueName: "kubernetes.io/secret/26e37eae-7032-4c9a-a258-e65b0ed0b18a-minikube-ingress-dns-token-9vrjl") pod "26e37eae-7032-4c9a-a258-e65b0ed0b18a" (UID: "26e37eae-7032-4c9a-a258-e65b0ed0b18a")
	Aug 30 23:06:38 ingress-addon-legacy-211142 kubelet[2865]: I0830 23:06:38.534587    2865 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e37eae-7032-4c9a-a258-e65b0ed0b18a-minikube-ingress-dns-token-9vrjl" (OuterVolumeSpecName: "minikube-ingress-dns-token-9vrjl") pod "26e37eae-7032-4c9a-a258-e65b0ed0b18a" (UID: "26e37eae-7032-4c9a-a258-e65b0ed0b18a"). InnerVolumeSpecName "minikube-ingress-dns-token-9vrjl". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 30 23:06:38 ingress-addon-legacy-211142 kubelet[2865]: I0830 23:06:38.630707    2865 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-9vrjl" (UniqueName: "kubernetes.io/secret/26e37eae-7032-4c9a-a258-e65b0ed0b18a-minikube-ingress-dns-token-9vrjl") on node "ingress-addon-legacy-211142" DevicePath ""
	Aug 30 23:06:39 ingress-addon-legacy-211142 kubelet[2865]: I0830 23:06:39.436714    2865 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 9586b1220bc4b7b1850df99e420d69251af62aab5fa1814b18436cddb16d0b89
	Aug 30 23:06:39 ingress-addon-legacy-211142 kubelet[2865]: W0830 23:06:39.630195    2865 container.go:412] Failed to create summary reader for "/kubepods/besteffort/pod79343f41-9b61-4c83-aa65-0b16cc80bed6/73cfd0a607e6ff9a430850b5b3999fa6b9d818d5d8ae972b9d7070e25bcb0893": none of the resources are being tracked.
	Aug 30 23:06:39 ingress-addon-legacy-211142 kubelet[2865]: I0830 23:06:39.679767    2865 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 73f0c94eedb6ebbc4b5c9c8aedac15f2b59f28453b8ef74acb3c14eec3b187c3
	Aug 30 23:06:39 ingress-addon-legacy-211142 kubelet[2865]: W0830 23:06:39.683143    2865 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-dhtw4 through plugin: invalid network status for
	Aug 30 23:06:39 ingress-addon-legacy-211142 kubelet[2865]: I0830 23:06:39.689590    2865 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 73cfd0a607e6ff9a430850b5b3999fa6b9d818d5d8ae972b9d7070e25bcb0893
	Aug 30 23:06:39 ingress-addon-legacy-211142 kubelet[2865]: E0830 23:06:39.689967    2865 pod_workers.go:191] Error syncing pod 79343f41-9b61-4c83-aa65-0b16cc80bed6 ("hello-world-app-5f5d8b66bb-dhtw4_default(79343f41-9b61-4c83-aa65-0b16cc80bed6)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-dhtw4_default(79343f41-9b61-4c83-aa65-0b16cc80bed6)"
	Aug 30 23:06:39 ingress-addon-legacy-211142 kubelet[2865]: I0830 23:06:39.697487    2865 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 9586b1220bc4b7b1850df99e420d69251af62aab5fa1814b18436cddb16d0b89
	Aug 30 23:06:40 ingress-addon-legacy-211142 kubelet[2865]: W0830 23:06:40.701223    2865 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-dhtw4 through plugin: invalid network status for
	Aug 30 23:06:50 ingress-addon-legacy-211142 kubelet[2865]: E0830 23:06:50.310860    2865 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-wf7ss.17804bce8801e9c4", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-wf7ss", UID:"02fc35e1-8c84-44ca-a64d-bb22e024295f", APIVersion:"v1", ResourceVersion:"477", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-211142"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13410c2926305c4, ext:114557284608, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13410c2926305c4, ext:114557284608, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-wf7ss.17804bce8801e9c4" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 30 23:06:50 ingress-addon-legacy-211142 kubelet[2865]: E0830 23:06:50.336119    2865 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-wf7ss.17804bce8801e9c4", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-wf7ss", UID:"02fc35e1-8c84-44ca-a64d-bb22e024295f", APIVersion:"v1", ResourceVersion:"477", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-211142"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13410c2926305c4, ext:114557284608, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13410c292da7156, ext:114565110930, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-wf7ss.17804bce8801e9c4" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 30 23:06:51 ingress-addon-legacy-211142 kubelet[2865]: I0830 23:06:51.437418    2865 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 73cfd0a607e6ff9a430850b5b3999fa6b9d818d5d8ae972b9d7070e25bcb0893
	Aug 30 23:06:51 ingress-addon-legacy-211142 kubelet[2865]: E0830 23:06:51.438589    2865 pod_workers.go:191] Error syncing pod 79343f41-9b61-4c83-aa65-0b16cc80bed6 ("hello-world-app-5f5d8b66bb-dhtw4_default(79343f41-9b61-4c83-aa65-0b16cc80bed6)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-dhtw4_default(79343f41-9b61-4c83-aa65-0b16cc80bed6)"
	Aug 30 23:06:52 ingress-addon-legacy-211142 kubelet[2865]: W0830 23:06:52.806698    2865 pod_container_deletor.go:77] Container "6b27f7e0fff73bef34da005afeb8c03c6256dafc052735ccc54b1fd0a69a0a77" not found in pod's containers
	Aug 30 23:06:54 ingress-addon-legacy-211142 kubelet[2865]: I0830 23:06:54.467862    2865 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/02fc35e1-8c84-44ca-a64d-bb22e024295f-webhook-cert") pod "02fc35e1-8c84-44ca-a64d-bb22e024295f" (UID: "02fc35e1-8c84-44ca-a64d-bb22e024295f")
	Aug 30 23:06:54 ingress-addon-legacy-211142 kubelet[2865]: I0830 23:06:54.467916    2865 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-mxn62" (UniqueName: "kubernetes.io/secret/02fc35e1-8c84-44ca-a64d-bb22e024295f-ingress-nginx-token-mxn62") pod "02fc35e1-8c84-44ca-a64d-bb22e024295f" (UID: "02fc35e1-8c84-44ca-a64d-bb22e024295f")
	Aug 30 23:06:54 ingress-addon-legacy-211142 kubelet[2865]: I0830 23:06:54.474311    2865 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02fc35e1-8c84-44ca-a64d-bb22e024295f-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "02fc35e1-8c84-44ca-a64d-bb22e024295f" (UID: "02fc35e1-8c84-44ca-a64d-bb22e024295f"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 30 23:06:54 ingress-addon-legacy-211142 kubelet[2865]: I0830 23:06:54.474455    2865 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02fc35e1-8c84-44ca-a64d-bb22e024295f-ingress-nginx-token-mxn62" (OuterVolumeSpecName: "ingress-nginx-token-mxn62") pod "02fc35e1-8c84-44ca-a64d-bb22e024295f" (UID: "02fc35e1-8c84-44ca-a64d-bb22e024295f"). InnerVolumeSpecName "ingress-nginx-token-mxn62". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 30 23:06:54 ingress-addon-legacy-211142 kubelet[2865]: I0830 23:06:54.568277    2865 reconciler.go:319] Volume detached for volume "ingress-nginx-token-mxn62" (UniqueName: "kubernetes.io/secret/02fc35e1-8c84-44ca-a64d-bb22e024295f-ingress-nginx-token-mxn62") on node "ingress-addon-legacy-211142" DevicePath ""
	Aug 30 23:06:54 ingress-addon-legacy-211142 kubelet[2865]: I0830 23:06:54.568471    2865 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/02fc35e1-8c84-44ca-a64d-bb22e024295f-webhook-cert") on node "ingress-addon-legacy-211142" DevicePath ""
	Aug 30 23:06:55 ingress-addon-legacy-211142 kubelet[2865]: W0830 23:06:55.455857    2865 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/02fc35e1-8c84-44ca-a64d-bb22e024295f/volumes" does not exist
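	The repeated "back-off 20s restarting failed container" messages above are the kubelet's crash-loop backoff: after each failed restart the delay doubles (10s, 20s, 40s, ...) up to a five-minute cap in the upstream kubelet. A toy sketch of that schedule (the constants mirror the documented kubelet defaults; the loop itself is purely illustrative):
	
	    package main
	
	    import (
	    	"fmt"
	    	"time"
	    )
	
	    func main() {
	    	backoff := 10 * time.Second      // initial crash-loop delay
	    	const maxDelay = 5 * time.Minute // kubelet caps the delay here
	
	    	for attempt := 1; attempt <= 7; attempt++ {
	    		fmt.Printf("restart %d: wait %s before next attempt\n", attempt, backoff)
	    		backoff *= 2 // double after every failed restart
	    		if backoff > maxDelay {
	    			backoff = maxDelay
	    		}
	    	}
	    }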
	
	* 
	* ==> storage-provisioner [a7ba58708a03] <==
	* I0830 23:05:16.908242       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0830 23:05:16.924002       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0830 23:05:16.924232       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0830 23:05:16.930888       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0830 23:05:16.931350       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-211142_a3c20520-92c6-4f62-80a9-790cef026230!
	I0830 23:05:16.932528       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"05f40c3c-3bf7-4738-8db2-bf776ba87b18", APIVersion:"v1", ResourceVersion:"399", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-211142_a3c20520-92c6-4f62-80a9-790cef026230 became leader
	I0830 23:05:17.031685       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-211142_a3c20520-92c6-4f62-80a9-790cef026230!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-211142 -n ingress-addon-legacy-211142
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-211142 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (56.73s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (454.92s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.17.0.2819877702.exe start -p running-upgrade-039167 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:132: (dbg) Non-zero exit: /tmp/minikube-v1.17.0.2819877702.exe start -p running-upgrade-039167 --memory=2200 --vm-driver=docker  --container-runtime=docker: exit status 80 (47.882762945s)

                                                
                                                
-- stdout --
	* [running-upgrade-039167] minikube v1.17.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17114
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-1496922/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig387995393
	* minikube 1.31.2 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.31.2
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	* Using the docker driver based on user configuration
	* Starting control plane node running-upgrade-039167 in cluster running-upgrade-039167
	* Pulling base image ...
	* Downloading Kubernetes v1.20.2 preload ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v8-v1....: 514.92 MiB / 514.92 MiB  100.00% 39.15 MiB p/s
	X Exiting due to GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.17.0.2819877702.exe start -p running-upgrade-039167 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0830 23:31:02.227476 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt: no such file or directory
E0830 23:31:39.103390 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
E0830 23:31:54.038646 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/skaffold-345815/client.crt: no such file or directory
E0830 23:31:54.044645 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/skaffold-345815/client.crt: no such file or directory
E0830 23:31:54.054880 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/skaffold-345815/client.crt: no such file or directory
E0830 23:31:54.075135 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/skaffold-345815/client.crt: no such file or directory
E0830 23:31:54.115426 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/skaffold-345815/client.crt: no such file or directory
E0830 23:31:54.195692 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/skaffold-345815/client.crt: no such file or directory
E0830 23:31:54.356121 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/skaffold-345815/client.crt: no such file or directory
E0830 23:31:54.676571 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/skaffold-345815/client.crt: no such file or directory
E0830 23:31:55.317554 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/skaffold-345815/client.crt: no such file or directory
E0830 23:31:56.597788 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/skaffold-345815/client.crt: no such file or directory
E0830 23:31:59.158884 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/skaffold-345815/client.crt: no such file or directory
E0830 23:32:04.280047 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/skaffold-345815/client.crt: no such file or directory
E0830 23:32:14.520214 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/skaffold-345815/client.crt: no such file or directory
E0830 23:32:25.342414 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
E0830 23:32:35.000879 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/skaffold-345815/client.crt: no such file or directory
version_upgrade_test.go:132: (dbg) Non-zero exit: /tmp/minikube-v1.17.0.2819877702.exe start -p running-upgrade-039167 --memory=2200 --vm-driver=docker  --container-runtime=docker: exit status 80 (3m21.144722566s)

                                                
                                                
-- stdout --
	* [running-upgrade-039167] minikube v1.17.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17114
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-1496922/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig3585560046
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-039167 in cluster running-upgrade-039167
	* Pulling base image ...
	* docker "running-upgrade-039167" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ... (spinner output elided)
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.17.0.2819877702.exe start -p running-upgrade-039167 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:132: (dbg) Non-zero exit: /tmp/minikube-v1.17.0.2819877702.exe start -p running-upgrade-039167 --memory=2200 --vm-driver=docker  --container-runtime=docker: exit status 80 (3m20.616994518s)

                                                
                                                
-- stdout --
	* [running-upgrade-039167] minikube v1.17.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17114
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-1496922/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig1177206855
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-039167 in cluster running-upgrade-039167
	* Pulling base image ...
	* docker "running-upgrade-039167" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ... (spinner output elided)
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:138: legacy v1.17.0 start failed: exit status 80
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-08-30 23:37:17.772436352 +0000 UTC m=+2609.889515149
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-039167
helpers_test.go:235: (dbg) docker inspect running-upgrade-039167:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e1ab3d6ccde05fda65169d8b70d887ae6cc2ebe27e9a540b5e682f45173845dc",
	        "Created": "2023-08-30T23:37:09.763283554Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "created",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 128,
	            "Error": "Address already in use",
	            "StartedAt": "0001-01-01T00:00:00Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "",
	        "HostnamePath": "",
	        "HostsPath": "",
	        "LogPath": "",
	        "Name": "/running-upgrade-039167",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-039167:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-039167",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/da6a6e046c4776aee84ff5e84ab5ac2960ec20c9edcf8784979c34dcd35dc623-init/diff:/var/lib/docker/overlay2/7b9fb767199183279157065ff1de58eadd4310113d17b2d2961e7f1092dd2b7c/diff:/var/lib/docker/overlay2/f2e1c74660a3da84ac4016a83f8039ffeb1ff8131dade30700c209a0365b35c0/diff:/var/lib/docker/overlay2/62a2af41ef911e0cca696f631be6c930c14853356470b6bb25d86ed2b02da3b7/diff:/var/lib/docker/overlay2/f840d6addd9fc464eec4f5a0a7379be295dfa87015428ee9766f23907246a237/diff:/var/lib/docker/overlay2/996359d1f547e3f3286071e2a1bb4fa8e6b0e735bdc9a03237ce130930672f65/diff:/var/lib/docker/overlay2/de13536286dcf94b31426af287c22245c9378270b6368397ddbfaa04e7818d83/diff:/var/lib/docker/overlay2/7d36a74d767ea2e5944028833692abf9871cd0d252e922e3aba7c22759e02d00/diff:/var/lib/docker/overlay2/67cffa6842181db8900295a8b1f5b44cfe438c17762ce526e93d9fc27378a7e8/diff:/var/lib/docker/overlay2/f69c952771e0ac3dc51e58e7bad83df1a70394828544a9c358c59bd97e5bfb1c/diff:/var/lib/docker/overlay2/244096
e09b63958f2b2eb2b9471c25272d29111b8b5f17ae4f2cb88b736d0b50/diff:/var/lib/docker/overlay2/86838d89a7d13442781b2eac44706520a45aa436adbe3ff1b2c8d5dd5c558f91/diff:/var/lib/docker/overlay2/5a66849afeae43d00220407b1f2c17f8308c2715db7c77aedcc280c2b60dcf19/diff:/var/lib/docker/overlay2/d5c09fc7d5b47e160319df7e60f298e74d51158e13a5f3624f226df00bb1b5aa/diff:/var/lib/docker/overlay2/a43c11959d596945f4ade35ad33d7a87d48bf825af95b3172ec160b8b4d306d3/diff:/var/lib/docker/overlay2/260b96d6ae4702c9d4e86c1b415558ae81ce0beefa9100793720e24557b3c678/diff:/var/lib/docker/overlay2/9dd020902e335cf02c82aaaee236acd61f28da568677e39b166074eb966c2778/diff:/var/lib/docker/overlay2/8c8d7d1600197a80b8e015ec929b00985b867538e26b96d698f41c472353658d/diff:/var/lib/docker/overlay2/18e9f2876aeeb0e8b0dcd490a6cbbf814fd47a77ff4db21366f59dbf7fe50aa5/diff:/var/lib/docker/overlay2/927727a649646054a140ed4d4cb9b9742fdf254d8fa7a9d09649628c16cf1ba2/diff:/var/lib/docker/overlay2/1fd51321530c2146cd1397125f3c091c73763229f0140697c4c4c6ecbd9d78b3/diff:/var/lib/d
ocker/overlay2/570baf76c05ac58306205223b41b007b96bed8adcc795381281cb00975cb7418/diff:/var/lib/docker/overlay2/a6df3afb0f96f5d239307c8008381134aa8bf2f337293f1795d7a49b8b6b0f2e/diff:/var/lib/docker/overlay2/7ead4336061813d67df2c6949d615083e66ba0adfd5912a9a9d657832011ba0c/diff:/var/lib/docker/overlay2/fd5e897d7db65b0efca6d6071c9f2d7f03f0bdc9aae60dc8dd8eb4840b4a5729/diff:/var/lib/docker/overlay2/276240e2bd85702dd85be16724e674af4c36a02b53ea5275a749a6fae1dfb198/diff:/var/lib/docker/overlay2/29a8ffdef9aed3118608d8ac03d51feac13f0b52c17a10f8082794c3b82cca3e/diff:/var/lib/docker/overlay2/ee4df6b2bbf36a95e217422243f370f4e674186a0b187602b4bb66dad24a3ea3/diff:/var/lib/docker/overlay2/a0e9eaf7ff11a598f400801949821b25783b80152b51ee0f390e5c876285e0f7/diff:/var/lib/docker/overlay2/748dfc3fd29250bf881a7c87ec21cbc2c3266b037fd7eb297b8121f841464753/diff:/var/lib/docker/overlay2/87da456a44f07eaf0e91f65224c99674ab01c7a1be56543af4c476b0ea737431/diff:/var/lib/docker/overlay2/713f056b763f283de24614ef0762c85dbe6805a1b92f6aef056e11194c9
4e7a5/diff:/var/lib/docker/overlay2/8d11cb37f5089552fa5af9fcbe93d42a58f2a503bcecf121824fa74f9963b6b7/diff:/var/lib/docker/overlay2/26bb21845a24d752a7a45cf11469fe33029382ca67f7e5edaff3ced4f69b7ea1/diff:/var/lib/docker/overlay2/b8f06309be88ad382acd490f6087fafdf94cd983a364b8e3c2b0930c2e572008/diff",
	                "MergedDir": "/var/lib/docker/overlay2/da6a6e046c4776aee84ff5e84ab5ac2960ec20c9edcf8784979c34dcd35dc623/merged",
	                "UpperDir": "/var/lib/docker/overlay2/da6a6e046c4776aee84ff5e84ab5ac2960ec20c9edcf8784979c34dcd35dc623/diff",
	                "WorkDir": "/var/lib/docker/overlay2/da6a6e046c4776aee84ff5e84ab5ac2960ec20c9edcf8784979c34dcd35dc623/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-039167",
	                "Source": "/var/lib/docker/volumes/running-upgrade-039167/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-039167",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-039167",
	                "name.minikube.sigs.k8s.io": "running-upgrade-039167",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-039167": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.59.0"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e1ab3d6ccde0",
	                        "running-upgrade-039167"
	                    ],
	                    "NetworkID": "",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-039167 -n running-upgrade-039167
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-039167 -n running-upgrade-039167: exit status 7 (78.644926ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "running-upgrade-039167" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "running-upgrade-039167" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-039167
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-039167: (1.685939198s)
--- FAIL: TestRunningBinaryUpgrade (454.92s)
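
The root cause is visible in the docker inspect dump above: the recreated kic container is pinned to IPAMConfig IPv4Address 192.168.59.0 on the running-upgrade-039167 network, and Docker refuses to start it with "Address already in use". A minimal diagnostic sketch in shell, assuming a host with the docker CLI and this run's profile name; the address comes from the inspect output and may differ on other runs:

	# Show the container's recorded state and error (matches the State block in the inspect dump)
	docker inspect running-upgrade-039167 --format '{{.State.Status}}: {{.State.Error}}'

	# List every Docker network with its subnets to find which one already contains 192.168.59.0
	for net in $(docker network ls --format '{{.Name}}'); do
	  printf '%s\t' "$net"
	  docker network inspect "$net" --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}'
	done

	# If a stale minikube profile holds the subnet, deleting it releases the address
	out/minikube-linux-arm64 delete -p running-upgrade-039167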

                                                
                                    

Test pass (292/319)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 15.99
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.28.1/json-events 8.72
11 TestDownloadOnly/v1.28.1/preload-exists 0
15 TestDownloadOnly/v1.28.1/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.23
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.15
19 TestBinaryMirror 0.6
20 TestOffline 61.39
22 TestAddons/Setup 144.61
24 TestAddons/parallel/Registry 16.23
26 TestAddons/parallel/InspektorGadget 10.81
27 TestAddons/parallel/MetricsServer 5.9
30 TestAddons/parallel/CSI 62.32
31 TestAddons/parallel/Headlamp 11.64
32 TestAddons/parallel/CloudSpanner 5.72
35 TestAddons/serial/GCPAuth/Namespaces 0.27
36 TestAddons/StoppedEnableDisable 11.3
37 TestCertOptions 42.93
38 TestCertExpiration 253.68
39 TestDockerFlags 42.34
40 TestForceSystemdFlag 44.24
41 TestForceSystemdEnv 44.61
47 TestErrorSpam/setup 33.19
48 TestErrorSpam/start 0.84
49 TestErrorSpam/status 1.1
50 TestErrorSpam/pause 1.41
51 TestErrorSpam/unpause 1.57
52 TestErrorSpam/stop 1.44
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 96.14
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 36.71
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.1
63 TestFunctional/serial/CacheCmd/cache/add_remote 3.25
64 TestFunctional/serial/CacheCmd/cache/add_local 0.98
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
66 TestFunctional/serial/CacheCmd/cache/list 0.07
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.34
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.7
69 TestFunctional/serial/CacheCmd/cache/delete 0.13
70 TestFunctional/serial/MinikubeKubectlCmd 0.15
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
72 TestFunctional/serial/ExtraConfig 40.25
73 TestFunctional/serial/ComponentHealth 0.1
74 TestFunctional/serial/LogsCmd 1.27
75 TestFunctional/serial/LogsFileCmd 1.34
76 TestFunctional/serial/InvalidService 4.92
78 TestFunctional/parallel/ConfigCmd 0.54
79 TestFunctional/parallel/DashboardCmd 12.24
80 TestFunctional/parallel/DryRun 0.55
81 TestFunctional/parallel/InternationalLanguage 0.36
82 TestFunctional/parallel/StatusCmd 1.25
86 TestFunctional/parallel/ServiceCmdConnect 7.77
87 TestFunctional/parallel/AddonsCmd 0.22
88 TestFunctional/parallel/PersistentVolumeClaim 25.92
90 TestFunctional/parallel/SSHCmd 0.77
91 TestFunctional/parallel/CpCmd 1.57
93 TestFunctional/parallel/FileSync 0.36
94 TestFunctional/parallel/CertSync 2.38
98 TestFunctional/parallel/NodeLabels 0.09
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.36
102 TestFunctional/parallel/License 0.49
103 TestFunctional/parallel/Version/short 0.09
104 TestFunctional/parallel/Version/components 0.84
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.44
109 TestFunctional/parallel/ImageCommands/ImageBuild 2.93
110 TestFunctional/parallel/ImageCommands/Setup 2.77
111 TestFunctional/parallel/DockerEnv/bash 1.41
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.25
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.24
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.25
115 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.44
116 TestFunctional/parallel/ServiceCmd/DeployApp 11.36
117 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.91
118 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.22
119 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.05
120 TestFunctional/parallel/ServiceCmd/List 0.48
121 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
122 TestFunctional/parallel/ImageCommands/ImageRemove 0.63
123 TestFunctional/parallel/ServiceCmd/HTTPS 0.49
124 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.45
125 TestFunctional/parallel/ServiceCmd/Format 0.59
126 TestFunctional/parallel/ServiceCmd/URL 0.51
127 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.36
129 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.76
130 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.41
133 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
134 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
138 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
139 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
140 TestFunctional/parallel/ProfileCmd/profile_list 0.43
141 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
142 TestFunctional/parallel/MountCmd/any-port 7.61
143 TestFunctional/parallel/MountCmd/specific-port 2.6
144 TestFunctional/parallel/MountCmd/VerifyCleanup 3
145 TestFunctional/delete_addon-resizer_images 0.08
146 TestFunctional/delete_my-image_image 0.02
147 TestFunctional/delete_minikube_cached_images 0.02
151 TestImageBuild/serial/Setup 32.78
152 TestImageBuild/serial/NormalBuild 1.81
153 TestImageBuild/serial/BuildWithBuildArg 0.94
154 TestImageBuild/serial/BuildWithDockerIgnore 0.72
155 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.75
158 TestIngressAddonLegacy/StartLegacyK8sCluster 109.47
160 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.44
161 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.7
165 TestJSONOutput/start/Command 82.25
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
171 TestJSONOutput/pause/Command 0.61
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
177 TestJSONOutput/unpause/Command 0.58
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 10.92
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.24
190 TestKicCustomNetwork/create_custom_network 36.2
191 TestKicCustomNetwork/use_default_bridge_network 34.38
192 TestKicExistingNetwork 37.68
193 TestKicCustomSubnet 34.6
194 TestKicStaticIP 35.84
195 TestMainNoArgs 0.06
196 TestMinikubeProfile 69.82
199 TestMountStart/serial/StartWithMountFirst 11.01
200 TestMountStart/serial/VerifyMountFirst 0.29
201 TestMountStart/serial/StartWithMountSecond 10.76
202 TestMountStart/serial/VerifyMountSecond 0.29
203 TestMountStart/serial/DeleteFirst 1.5
204 TestMountStart/serial/VerifyMountPostDelete 0.28
205 TestMountStart/serial/Stop 1.22
206 TestMountStart/serial/RestartStopped 9.54
207 TestMountStart/serial/VerifyMountPostStop 0.29
210 TestMultiNode/serial/FreshStart2Nodes 79.6
211 TestMultiNode/serial/DeployApp2Nodes 45.93
212 TestMultiNode/serial/PingHostFrom2Pods 1.17
213 TestMultiNode/serial/AddNode 20.77
214 TestMultiNode/serial/ProfileList 0.36
215 TestMultiNode/serial/CopyFile 11.37
216 TestMultiNode/serial/StopNode 2.4
217 TestMultiNode/serial/StartAfterStop 14.06
218 TestMultiNode/serial/RestartKeepsNodes 119.79
219 TestMultiNode/serial/DeleteNode 5.09
220 TestMultiNode/serial/StopMultiNode 21.82
221 TestMultiNode/serial/RestartMultiNode 83.25
222 TestMultiNode/serial/ValidateNameConflict 36.24
227 TestPreload 164.83
229 TestScheduledStopUnix 108.87
230 TestSkaffold 104.06
232 TestInsufficientStorage 11.05
235 TestKubernetesUpgrade 389.57
236 TestMissingContainerUpgrade 140.41
248 TestStoppedBinaryUpgrade/Setup 0.96
249 TestStoppedBinaryUpgrade/Upgrade 100.51
250 TestStoppedBinaryUpgrade/MinikubeLogs 1.67
252 TestPause/serial/Start 49.69
253 TestPause/serial/SecondStartNoReconfiguration 37.64
254 TestPause/serial/Pause 0.63
255 TestPause/serial/VerifyStatus 0.36
256 TestPause/serial/Unpause 0.57
257 TestPause/serial/PauseAgain 1.15
258 TestPause/serial/DeletePaused 2.12
259 TestPause/serial/VerifyDeletedResources 0.36
268 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
269 TestNoKubernetes/serial/StartWithK8s 39.54
270 TestNoKubernetes/serial/StartWithStopK8s 7.9
271 TestNoKubernetes/serial/Start 11.2
272 TestNoKubernetes/serial/VerifyK8sNotRunning 0.45
273 TestNoKubernetes/serial/ProfileList 3.8
274 TestNoKubernetes/serial/Stop 1.24
275 TestNoKubernetes/serial/StartNoArgs 8.95
276 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.36
277 TestNetworkPlugins/group/auto/Start 94.52
278 TestNetworkPlugins/group/kindnet/Start 69.97
279 TestNetworkPlugins/group/kindnet/ControllerPod 5.04
280 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
281 TestNetworkPlugins/group/kindnet/NetCatPod 9.33
282 TestNetworkPlugins/group/kindnet/DNS 0.22
283 TestNetworkPlugins/group/kindnet/Localhost 0.19
284 TestNetworkPlugins/group/kindnet/HairPin 0.21
285 TestNetworkPlugins/group/auto/KubeletFlags 0.45
286 TestNetworkPlugins/group/auto/NetCatPod 12.52
287 TestNetworkPlugins/group/auto/DNS 0.27
288 TestNetworkPlugins/group/auto/Localhost 0.26
289 TestNetworkPlugins/group/auto/HairPin 0.27
290 TestNetworkPlugins/group/calico/Start 83.38
291 TestNetworkPlugins/group/custom-flannel/Start 70.48
292 TestNetworkPlugins/group/calico/ControllerPod 5.05
293 TestNetworkPlugins/group/calico/KubeletFlags 0.45
294 TestNetworkPlugins/group/calico/NetCatPod 11.69
295 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.39
296 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.45
297 TestNetworkPlugins/group/calico/DNS 0.23
298 TestNetworkPlugins/group/calico/Localhost 0.17
299 TestNetworkPlugins/group/calico/HairPin 0.2
300 TestNetworkPlugins/group/custom-flannel/DNS 0.22
301 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
302 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
303 TestNetworkPlugins/group/false/Start 93.28
304 TestNetworkPlugins/group/enable-default-cni/Start 54.57
305 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
306 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.33
307 TestNetworkPlugins/group/enable-default-cni/DNS 34.33
308 TestNetworkPlugins/group/false/KubeletFlags 0.34
309 TestNetworkPlugins/group/false/NetCatPod 9.38
310 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
311 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
312 TestNetworkPlugins/group/false/DNS 0.27
313 TestNetworkPlugins/group/false/Localhost 0.19
314 TestNetworkPlugins/group/false/HairPin 0.2
315 TestNetworkPlugins/group/flannel/Start 68.23
316 TestNetworkPlugins/group/bridge/Start 58.87
317 TestNetworkPlugins/group/bridge/KubeletFlags 0.48
318 TestNetworkPlugins/group/bridge/NetCatPod 12.5
319 TestNetworkPlugins/group/flannel/ControllerPod 5.03
320 TestNetworkPlugins/group/bridge/DNS 0.22
321 TestNetworkPlugins/group/bridge/Localhost 0.2
322 TestNetworkPlugins/group/flannel/KubeletFlags 0.46
323 TestNetworkPlugins/group/bridge/HairPin 0.25
324 TestNetworkPlugins/group/flannel/NetCatPod 9.42
325 TestNetworkPlugins/group/flannel/DNS 0.23
326 TestNetworkPlugins/group/flannel/Localhost 0.26
327 TestNetworkPlugins/group/flannel/HairPin 0.31
328 TestNetworkPlugins/group/kubenet/Start 95.39
330 TestStartStop/group/old-k8s-version/serial/FirstStart 136.95
331 TestNetworkPlugins/group/kubenet/KubeletFlags 0.31
332 TestNetworkPlugins/group/kubenet/NetCatPod 10.31
333 TestNetworkPlugins/group/kubenet/DNS 0.2
334 TestNetworkPlugins/group/kubenet/Localhost 0.19
335 TestNetworkPlugins/group/kubenet/HairPin 0.18
337 TestStartStop/group/no-preload/serial/FirstStart 91.55
338 TestStartStop/group/old-k8s-version/serial/DeployApp 10.52
339 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.18
340 TestStartStop/group/old-k8s-version/serial/Stop 11.26
341 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
342 TestStartStop/group/old-k8s-version/serial/SecondStart 429.7
343 TestStartStop/group/no-preload/serial/DeployApp 9.5
344 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.33
345 TestStartStop/group/no-preload/serial/Stop 10.99
346 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
347 TestStartStop/group/no-preload/serial/SecondStart 320.67
348 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.03
349 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
350 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.38
351 TestStartStop/group/no-preload/serial/Pause 3.16
353 TestStartStop/group/embed-certs/serial/FirstStart 84.53
354 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
355 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
356 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.39
357 TestStartStop/group/old-k8s-version/serial/Pause 3.18
359 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 94.57
360 TestStartStop/group/embed-certs/serial/DeployApp 9.53
361 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.17
362 TestStartStop/group/embed-certs/serial/Stop 10.83
363 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
364 TestStartStop/group/embed-certs/serial/SecondStart 343.12
365 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.64
366 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.2
367 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.09
368 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
369 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 350.58
370 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 12.03
371 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.14
372 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.48
373 TestStartStop/group/embed-certs/serial/Pause 4.21
375 TestStartStop/group/newest-cni/serial/FirstStart 53.92
376 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 14.03
377 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.17
378 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.59
379 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.84
380 TestStartStop/group/newest-cni/serial/DeployApp 0
381 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.17
382 TestStartStop/group/newest-cni/serial/Stop 11.32
383 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
384 TestStartStop/group/newest-cni/serial/SecondStart 30.54
385 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
386 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
387 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.37
388 TestStartStop/group/newest-cni/serial/Pause 2.97
TestDownloadOnly/v1.16.0/json-events (15.99s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-850644 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-850644 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (15.989580055s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (15.99s)
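
The json-events variant runs minikube with -o=json, which streams one CloudEvents-style JSON object per line on stdout instead of human-readable output (--alsologtostderr keeps logs on stderr). A hedged sketch of consuming that stream, assuming jq is installed and reusing the exact command from the log above:

	out/minikube-linux-arm64 start -o=json --download-only -p download-only-850644 \
	    --force --alsologtostderr --kubernetes-version=v1.16.0 \
	    --container-runtime=docker --driver=docker --container-runtime=docker \
	  | jq -r 'select(.data.message != null) | [.type, .data.message] | @tsv'   # print event type and message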

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
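
preload-exists only asserts that the previous step left the preload tarball in the cache. Checking the same thing by hand, with the cache path and file name taken from the download log later in this report:

	ls -lh /home/jenkins/minikube-integration/17114-1496922/.minikube/cache/preloaded-tarball/
	# expected entry: preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4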

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-850644
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-850644: exit status 85 (81.027207ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-850644 | jenkins | v1.31.2 | 30 Aug 23 22:53 UTC |          |
	|         | -p download-only-850644        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/30 22:53:47
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0830 22:53:47.993909 1502308 out.go:296] Setting OutFile to fd 1 ...
	I0830 22:53:47.994151 1502308 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:53:47.994181 1502308 out.go:309] Setting ErrFile to fd 2...
	I0830 22:53:47.994200 1502308 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:53:47.994487 1502308 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-1496922/.minikube/bin
	W0830 22:53:47.994633 1502308 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17114-1496922/.minikube/config/config.json: open /home/jenkins/minikube-integration/17114-1496922/.minikube/config/config.json: no such file or directory
	I0830 22:53:47.995039 1502308 out.go:303] Setting JSON to true
	I0830 22:53:47.995993 1502308 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":27364,"bootTime":1693408664,"procs":253,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1043-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0830 22:53:47.996085 1502308 start.go:138] virtualization:  
	I0830 22:53:47.999575 1502308 out.go:97] [download-only-850644] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	W0830 22:53:47.999810 1502308 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17114-1496922/.minikube/cache/preloaded-tarball: no such file or directory
	I0830 22:53:48.001922 1502308 out.go:169] MINIKUBE_LOCATION=17114
	I0830 22:53:47.999950 1502308 notify.go:220] Checking for updates...
	I0830 22:53:48.003890 1502308 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 22:53:48.005691 1502308 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17114-1496922/kubeconfig
	I0830 22:53:48.007358 1502308 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-1496922/.minikube
	I0830 22:53:48.009186 1502308 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0830 22:53:48.012702 1502308 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0830 22:53:48.012934 1502308 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 22:53:48.037140 1502308 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0830 22:53:48.037269 1502308 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 22:53:48.114389 1502308 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-08-30 22:53:48.104889673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 22:53:48.114491 1502308 docker.go:294] overlay module found
	I0830 22:53:48.116468 1502308 out.go:97] Using the docker driver based on user configuration
	I0830 22:53:48.116489 1502308 start.go:298] selected driver: docker
	I0830 22:53:48.116495 1502308 start.go:902] validating driver "docker" against <nil>
	I0830 22:53:48.116596 1502308 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 22:53:48.179715 1502308 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-08-30 22:53:48.170838265 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 22:53:48.179871 1502308 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0830 22:53:48.180143 1502308 start_flags.go:382] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0830 22:53:48.180293 1502308 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0830 22:53:48.183024 1502308 out.go:169] Using Docker driver with root privileges
	I0830 22:53:48.184911 1502308 cni.go:84] Creating CNI manager for ""
	I0830 22:53:48.184940 1502308 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0830 22:53:48.184955 1502308 start_flags.go:319] config:
	{Name:download-only-850644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-850644 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:53:48.187264 1502308 out.go:97] Starting control plane node download-only-850644 in cluster download-only-850644
	I0830 22:53:48.187306 1502308 cache.go:122] Beginning downloading kic base image for docker with docker
	I0830 22:53:48.189487 1502308 out.go:97] Pulling base image ...
	I0830 22:53:48.189509 1502308 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0830 22:53:48.189609 1502308 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec in local docker daemon
	I0830 22:53:48.205877 1502308 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec to local cache
	I0830 22:53:48.206053 1502308 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec in local cache directory
	I0830 22:53:48.206154 1502308 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec to local cache
	I0830 22:53:48.283779 1502308 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0830 22:53:48.283816 1502308 cache.go:57] Caching tarball of preloaded images
	I0830 22:53:48.284436 1502308 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0830 22:53:48.287904 1502308 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0830 22:53:48.287937 1502308 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0830 22:53:48.431321 1502308 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /home/jenkins/minikube-integration/17114-1496922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0830 22:53:52.893933 1502308 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec as a tarball
	I0830 22:53:55.962466 1502308 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0830 22:53:55.962561 1502308 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17114-1496922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0830 22:53:56.823180 1502308 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0830 22:53:56.823529 1502308 profile.go:148] Saving config to /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/download-only-850644/config.json ...
	I0830 22:53:56.823561 1502308 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/download-only-850644/config.json: {Name:mk664cc3b879bc311278d67bf3d431ead6cc90fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0830 22:53:56.823746 1502308 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0830 22:53:56.824418 1502308 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl.sha1 -> /home/jenkins/minikube-integration/17114-1496922/.minikube/cache/linux/arm64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-850644"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
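
The download step in the log above fetches kubectl and validates it against the published .sha1 file (the checksum URL is embedded in the download line). The same verification done by hand, using only URLs that appear in this log and assuming curl and sha1sum are available:

	curl -fLO https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl
	# the .sha1 file contains only the bare hash, so pair it with the file name for sha1sum
	echo "$(curl -fLs https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl.sha1)  kubectl" | sha1sum --check -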

                                                
                                    
TestDownloadOnly/v1.28.1/json-events (8.72s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-850644 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-850644 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (8.720130348s)
--- PASS: TestDownloadOnly/v1.28.1/json-events (8.72s)

                                                
                                    
TestDownloadOnly/v1.28.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/preload-exists
--- PASS: TestDownloadOnly/v1.28.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-850644
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-850644: exit status 85 (81.43021ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-850644 | jenkins | v1.31.2 | 30 Aug 23 22:53 UTC |          |
	|         | -p download-only-850644        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-850644 | jenkins | v1.31.2 | 30 Aug 23 22:54 UTC |          |
	|         | -p download-only-850644        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/30 22:54:04
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0830 22:54:04.069390 1502384 out.go:296] Setting OutFile to fd 1 ...
	I0830 22:54:04.069563 1502384 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:54:04.069573 1502384 out.go:309] Setting ErrFile to fd 2...
	I0830 22:54:04.069580 1502384 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 22:54:04.069880 1502384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-1496922/.minikube/bin
	W0830 22:54:04.070011 1502384 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17114-1496922/.minikube/config/config.json: open /home/jenkins/minikube-integration/17114-1496922/.minikube/config/config.json: no such file or directory
	I0830 22:54:04.070238 1502384 out.go:303] Setting JSON to true
	I0830 22:54:04.071171 1502384 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":27380,"bootTime":1693408664,"procs":251,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1043-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0830 22:54:04.071237 1502384 start.go:138] virtualization:  
	I0830 22:54:04.074980 1502384 out.go:97] [download-only-850644] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0830 22:54:04.077562 1502384 out.go:169] MINIKUBE_LOCATION=17114
	I0830 22:54:04.075303 1502384 notify.go:220] Checking for updates...
	I0830 22:54:04.082100 1502384 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 22:54:04.084090 1502384 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17114-1496922/kubeconfig
	I0830 22:54:04.086349 1502384 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-1496922/.minikube
	I0830 22:54:04.088874 1502384 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0830 22:54:04.093219 1502384 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0830 22:54:04.093750 1502384 config.go:182] Loaded profile config "download-only-850644": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0830 22:54:04.093820 1502384 start.go:810] api.Load failed for download-only-850644: filestore "download-only-850644": Docker machine "download-only-850644" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0830 22:54:04.093929 1502384 driver.go:373] Setting default libvirt URI to qemu:///system
	W0830 22:54:04.093954 1502384 start.go:810] api.Load failed for download-only-850644: filestore "download-only-850644": Docker machine "download-only-850644" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0830 22:54:04.117006 1502384 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0830 22:54:04.117083 1502384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 22:54:04.197567 1502384 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-08-30 22:54:04.188171133 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 22:54:04.197678 1502384 docker.go:294] overlay module found
	I0830 22:54:04.199947 1502384 out.go:97] Using the docker driver based on existing profile
	I0830 22:54:04.199968 1502384 start.go:298] selected driver: docker
	I0830 22:54:04.199974 1502384 start.go:902] validating driver "docker" against &{Name:download-only-850644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-850644 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:54:04.200189 1502384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 22:54:04.277720 1502384 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-08-30 22:54:04.268527743 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 22:54:04.278145 1502384 cni.go:84] Creating CNI manager for ""
	I0830 22:54:04.278166 1502384 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0830 22:54:04.278177 1502384 start_flags.go:319] config:
	{Name:download-only-850644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:download-only-850644 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 22:54:04.280335 1502384 out.go:97] Starting control plane node download-only-850644 in cluster download-only-850644
	I0830 22:54:04.280358 1502384 cache.go:122] Beginning downloading kic base image for docker with docker
	I0830 22:54:04.282112 1502384 out.go:97] Pulling base image ...
	I0830 22:54:04.282133 1502384 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0830 22:54:04.282302 1502384 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec in local docker daemon
	I0830 22:54:04.302033 1502384 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec to local cache
	I0830 22:54:04.302182 1502384 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec in local cache directory
	I0830 22:54:04.302205 1502384 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec in local cache directory, skipping pull
	I0830 22:54:04.302213 1502384 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec exists in cache, skipping pull
	I0830 22:54:04.302222 1502384 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec as a tarball
	I0830 22:54:04.344005 1502384 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0830 22:54:04.344031 1502384 cache.go:57] Caching tarball of preloaded images
	I0830 22:54:04.344201 1502384 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0830 22:54:04.346354 1502384 out.go:97] Downloading Kubernetes v1.28.1 preload ...
	I0830 22:54:04.346376 1502384 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 ...
	I0830 22:54:04.472875 1502384 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4?checksum=md5:014fa2c9750ed18a91c50dffb6ed7aeb -> /home/jenkins/minikube-integration/17114-1496922/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-850644"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
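For reference, the preload tarball fetched in the log above can be checked by hand against the md5 that download.go pins; a minimal sketch, assuming curl and md5sum are available on the host (URL and checksum copied verbatim from the log):
$ curl -fLO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
$ echo "014fa2c9750ed18a91c50dffb6ed7aeb  preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4" | md5sum -c -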
--- PASS: TestDownloadOnly/v1.28.1/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.23s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-850644
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-710377 --alsologtostderr --binary-mirror http://127.0.0.1:45655 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-710377" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-710377
--- PASS: TestBinaryMirror (0.60s)

TestOffline (61.39s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-238504 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-238504 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (59.06829159s)
helpers_test.go:175: Cleaning up "offline-docker-238504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-238504
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-238504: (2.321937319s)
--- PASS: TestOffline (61.39s)

TestAddons/Setup (144.61s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-arm64 start -p addons-435384 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Done: out/minikube-linux-arm64 start -p addons-435384 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (2m24.612284404s)
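The parallel subtests below exercise these addons one by one. The same addons can also be toggled after start instead of via --addons flags; hypothetical follow-up commands against the same profile, not part of the recorded run:
$ out/minikube-linux-arm64 -p addons-435384 addons list
$ out/minikube-linux-arm64 -p addons-435384 addons enable metrics-server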
--- PASS: TestAddons/Setup (144.61s)

TestAddons/parallel/Registry (16.23s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 34.118343ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-9hsgm" [927dfbd6-c866-43ed-9e9a-de2830d7ff89] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.035017474s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-w2rth" [7f082a39-dfa1-47b1-ae76-2bfc5efbea4d] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.014870993s
addons_test.go:316: (dbg) Run:  kubectl --context addons-435384 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-435384 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-435384 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.081404709s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-arm64 -p addons-435384 ip
2023/08/30 22:56:54 [DEBUG] GET http://192.168.49.2:5000
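The DEBUG GET above probes the registry through the node IP returned by "minikube ip"; while the addon is still enabled, the same probe can be repeated from the host (IP and port copied from the log):
$ curl -sS -o /dev/null -w '%{http_code}\n' http://192.168.49.2:5000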
addons_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p addons-435384 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.23s)

TestAddons/parallel/InspektorGadget (10.81s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-cdsfj" [81464e58-c22f-48bd-bb50-89fba45164f6] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.011562194s
addons_test.go:817: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-435384
addons_test.go:817: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-435384: (5.797847137s)
--- PASS: TestAddons/parallel/InspektorGadget (10.81s)

TestAddons/parallel/MetricsServer (5.9s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 4.888379ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-pdpjb" [cee10f1b-6f1c-408f-a38c-926f2b99bc1e] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.014283096s
addons_test.go:391: (dbg) Run:  kubectl --context addons-435384 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p addons-435384 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.90s)

TestAddons/parallel/CSI (62.32s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 9.451457ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-435384 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435384 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435384 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435384 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435384 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435384 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435384 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435384 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435384 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435384 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435384 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435384 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435384 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435384 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435384 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435384 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435384 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435384 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435384 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435384 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435384 get pvc hpvc -o jsonpath={.status.phase} -n default
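The repeated helpers_test.go:394 lines above are the test's poll loop on the claim's phase; a rough shell equivalent, assuming the same context and Bound as the target phase:
$ until [ "$(kubectl --context addons-435384 get pvc hpvc -n default -o jsonpath='{.status.phase}')" = Bound ]; do sleep 2; done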
addons_test.go:550: (dbg) Run:  kubectl --context addons-435384 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [e84e68f9-77a7-443b-908c-475a44ff2185] Pending
helpers_test.go:344: "task-pv-pod" [e84e68f9-77a7-443b-908c-475a44ff2185] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [e84e68f9-77a7-443b-908c-475a44ff2185] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.021308163s
addons_test.go:560: (dbg) Run:  kubectl --context addons-435384 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-435384 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-435384 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-435384 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-435384 delete pod task-pv-pod
addons_test.go:570: (dbg) Done: kubectl --context addons-435384 delete pod task-pv-pod: (1.006286941s)
addons_test.go:576: (dbg) Run:  kubectl --context addons-435384 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-435384 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435384 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435384 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435384 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435384 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435384 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435384 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435384 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435384 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435384 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435384 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435384 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-435384 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [77939bc6-b931-4008-9d6b-bd634f57d5a4] Pending
helpers_test.go:344: "task-pv-pod-restore" [77939bc6-b931-4008-9d6b-bd634f57d5a4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [77939bc6-b931-4008-9d6b-bd634f57d5a4] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.017224038s
addons_test.go:602: (dbg) Run:  kubectl --context addons-435384 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-435384 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-435384 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-arm64 -p addons-435384 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-arm64 -p addons-435384 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.692333427s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-arm64 -p addons-435384 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (62.32s)

TestAddons/parallel/Headlamp (11.64s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-435384 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-435384 --alsologtostderr -v=1: (1.607866129s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-699c48fb74-h4ng7" [5f545254-777d-42b7-a6a2-54bd94c8eb79] Pending
helpers_test.go:344: "headlamp-699c48fb74-h4ng7" [5f545254-777d-42b7-a6a2-54bd94c8eb79] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-699c48fb74-h4ng7" [5f545254-777d-42b7-a6a2-54bd94c8eb79] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.029853747s
--- PASS: TestAddons/parallel/Headlamp (11.64s)

TestAddons/parallel/CloudSpanner (5.72s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6dcc56475c-nr6q9" [7a37eec2-9365-4226-b164-929106a30df0] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.020933926s
addons_test.go:836: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-435384
--- PASS: TestAddons/parallel/CloudSpanner (5.72s)

TestAddons/serial/GCPAuth/Namespaces (0.27s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-435384 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-435384 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.27s)

TestAddons/StoppedEnableDisable (11.3s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-435384
addons_test.go:148: (dbg) Done: out/minikube-linux-arm64 stop -p addons-435384: (10.999044562s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-435384
addons_test.go:156: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-435384
addons_test.go:161: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-435384
--- PASS: TestAddons/StoppedEnableDisable (11.30s)

TestCertOptions (42.93s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-731881 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-731881 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (40.001327043s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-731881 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-731881 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-731881 -- "sudo cat /etc/kubernetes/admin.conf"
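The openssl call above dumps the whole certificate; to pull out only the SANs that the --apiserver-ips/--apiserver-names flags should have injected, a narrower sketch (same cert path as in the log; -ext assumes OpenSSL 1.1.1+ inside the node):
$ out/minikube-linux-arm64 -p cert-options-731881 ssh "sudo openssl x509 -noout -ext subjectAltName -in /var/lib/minikube/certs/apiserver.crt"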
helpers_test.go:175: Cleaning up "cert-options-731881" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-731881
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-731881: (2.207696653s)
--- PASS: TestCertOptions (42.93s)

TestCertExpiration (253.68s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-852172 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-852172 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (44.445604486s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-852172 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-852172 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (27.056273011s)
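One way to confirm the second start re-issued certificates with the 8760h lifetime, assuming the apiserver cert lives at the same path seen in TestCertOptions:
$ out/minikube-linux-arm64 -p cert-expiration-852172 ssh "sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"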
helpers_test.go:175: Cleaning up "cert-expiration-852172" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-852172
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-852172: (2.172593222s)
--- PASS: TestCertExpiration (253.68s)

TestDockerFlags (42.34s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-686081 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-686081 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (38.840295165s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-686081 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-686081 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
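The two systemctl show calls assert that the --docker-env and --docker-opt values surfaced in the docker unit; filtering for one of each against the same profile, a sketch (assumes --docker-opt=debug shows up as a --debug flag in ExecStart):
$ out/minikube-linux-arm64 -p docker-flags-686081 ssh "sudo systemctl show docker --property=Environment --no-pager" | grep FOO=BAR
$ out/minikube-linux-arm64 -p docker-flags-686081 ssh "sudo systemctl show docker --property=ExecStart --no-pager" | grep -- --debug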
helpers_test.go:175: Cleaning up "docker-flags-686081" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-686081
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-686081: (2.476445578s)
--- PASS: TestDockerFlags (42.34s)

TestForceSystemdFlag (44.24s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-770188 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-770188 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (41.201510187s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-770188 ssh "docker info --format {{.CgroupDriver}}"
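The assertion here is the CgroupDriver field of docker info; with --force-systemd the expected value is systemd, so a quick standalone check could be:
$ out/minikube-linux-arm64 -p force-systemd-flag-770188 ssh "docker info --format {{.CgroupDriver}}" | grep -w systemd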
helpers_test.go:175: Cleaning up "force-systemd-flag-770188" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-770188
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-770188: (2.443980739s)
--- PASS: TestForceSystemdFlag (44.24s)

TestForceSystemdEnv (44.61s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-799344 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0830 23:27:25.269888 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt: no such file or directory
E0830 23:27:25.342211 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-799344 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (41.96905333s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-799344 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-799344" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-799344
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-799344: (2.230401856s)
--- PASS: TestForceSystemdEnv (44.61s)

TestErrorSpam/setup (33.19s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-654855 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-654855 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-654855 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-654855 --driver=docker  --container-runtime=docker: (33.192807758s)
--- PASS: TestErrorSpam/setup (33.19s)

TestErrorSpam/start (0.84s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-654855 --log_dir /tmp/nospam-654855 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-654855 --log_dir /tmp/nospam-654855 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-654855 --log_dir /tmp/nospam-654855 start --dry-run
--- PASS: TestErrorSpam/start (0.84s)

TestErrorSpam/status (1.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-654855 --log_dir /tmp/nospam-654855 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-654855 --log_dir /tmp/nospam-654855 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-654855 --log_dir /tmp/nospam-654855 status
--- PASS: TestErrorSpam/status (1.10s)

TestErrorSpam/pause (1.41s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-654855 --log_dir /tmp/nospam-654855 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-654855 --log_dir /tmp/nospam-654855 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-654855 --log_dir /tmp/nospam-654855 pause
--- PASS: TestErrorSpam/pause (1.41s)

TestErrorSpam/unpause (1.57s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-654855 --log_dir /tmp/nospam-654855 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-654855 --log_dir /tmp/nospam-654855 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-654855 --log_dir /tmp/nospam-654855 unpause
--- PASS: TestErrorSpam/unpause (1.57s)

TestErrorSpam/stop (1.44s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-654855 --log_dir /tmp/nospam-654855 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-654855 --log_dir /tmp/nospam-654855 stop: (1.233187197s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-654855 --log_dir /tmp/nospam-654855 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-654855 --log_dir /tmp/nospam-654855 stop
--- PASS: TestErrorSpam/stop (1.44s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17114-1496922/.minikube/files/etc/test/nested/copy/1502303/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (96.14s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-489151 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-489151 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m36.140822716s)
--- PASS: TestFunctional/serial/StartWithProxy (96.14s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.71s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-489151 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-489151 --alsologtostderr -v=8: (36.713106029s)
functional_test.go:659: soft start took 36.713638854s for "functional-489151" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.71s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-489151 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-489151 cache add registry.k8s.io/pause:3.1: (1.135618141s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-489151 cache add registry.k8s.io/pause:3.3: (1.103829521s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-489151 cache add registry.k8s.io/pause:latest: (1.009094581s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.25s)

TestFunctional/serial/CacheCmd/cache/add_local (0.98s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-489151 /tmp/TestFunctionalserialCacheCmdcacheadd_local3932711843/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 cache add minikube-local-cache-test:functional-489151
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 cache delete minikube-local-cache-test:functional-489151
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-489151
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.98s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.7s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-489151 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (339.751288ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.70s)
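
The cache_reload sequence above reduces to four commands: delete the image inside the node, confirm crictl no longer finds it (the expected exit 1 above), reload from the host-side cache, and confirm it is back:

	out/minikube-linux-arm64 -p functional-489151 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-489151 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
	out/minikube-linux-arm64 -p functional-489151 cache reload
	out/minikube-linux-arm64 -p functional-489151 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds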

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 kubectl -- --context functional-489151 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-489151 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (40.25s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-489151 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0830 23:01:39.102625 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
E0830 23:01:39.109473 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
E0830 23:01:39.119791 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
E0830 23:01:39.140069 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
E0830 23:01:39.180327 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
E0830 23:01:39.260756 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
E0830 23:01:39.421162 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
E0830 23:01:39.741737 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
E0830 23:01:40.382654 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
E0830 23:01:41.662955 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
E0830 23:01:44.223601 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
E0830 23:01:49.343799 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
E0830 23:01:59.583933 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-489151 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.249430363s)
functional_test.go:757: restart took 40.249533756s for "functional-489151" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.25s)
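
The repeated cert_rotation errors during this restart point at the client certificate of the already-deleted addons-435384 profile; they appear to come from a stale client-go certificate watcher, not from the cluster under test. The restart itself is a single command:

	# Restart the existing profile with an apiserver flag override, waiting for all components.
	out/minikube-linux-arm64 start -p functional-489151 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --wait=all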

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-489151 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.27s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-489151 logs: (1.268803807s)
--- PASS: TestFunctional/serial/LogsCmd (1.27s)

TestFunctional/serial/LogsFileCmd (1.34s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 logs --file /tmp/TestFunctionalserialLogsFileCmd4224206397/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-489151 logs --file /tmp/TestFunctionalserialLogsFileCmd4224206397/001/logs.txt: (1.341643844s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.34s)

TestFunctional/serial/InvalidService (4.92s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-489151 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-489151
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-489151: exit status 115 (569.639021ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32029 |
	|-----------|-------------|-------------|---------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-489151 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-489151 delete -f testdata/invalidsvc.yaml: (1.03834666s)
--- PASS: TestFunctional/serial/InvalidService (4.92s)
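
Exit status 115 (SVC_UNREACHABLE) above is the expected outcome when `minikube service` targets a service whose selector matches no running pod; condensed:

	kubectl --context functional-489151 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-arm64 service invalid-svc -p functional-489151   # exit 115: SVC_UNREACHABLE
	kubectl --context functional-489151 delete -f testdata/invalidsvc.yaml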

TestFunctional/parallel/ConfigCmd (0.54s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-489151 config get cpus: exit status 14 (90.206046ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-489151 config get cpus: exit status 14 (72.6082ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.54s)
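
Exit status 14 above corresponds to the "specified key could not be found in config" error; the round trip being verified is:

	out/minikube-linux-arm64 -p functional-489151 config unset cpus
	out/minikube-linux-arm64 -p functional-489151 config get cpus    # exit 14: key not set
	out/minikube-linux-arm64 -p functional-489151 config set cpus 2
	out/minikube-linux-arm64 -p functional-489151 config get cpus    # prints 2
	out/minikube-linux-arm64 -p functional-489151 config unset cpus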

TestFunctional/parallel/DashboardCmd (12.24s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-489151 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-489151 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1541337: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.24s)

TestFunctional/parallel/DryRun (0.55s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-489151 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-489151 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (221.93144ms)

-- stdout --
	* [functional-489151] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17114
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17114-1496922/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-1496922/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I0830 23:03:06.433614 1540806 out.go:296] Setting OutFile to fd 1 ...
	I0830 23:03:06.433853 1540806 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 23:03:06.433880 1540806 out.go:309] Setting ErrFile to fd 2...
	I0830 23:03:06.433921 1540806 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 23:03:06.434275 1540806 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-1496922/.minikube/bin
	I0830 23:03:06.434655 1540806 out.go:303] Setting JSON to false
	I0830 23:03:06.435818 1540806 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":27922,"bootTime":1693408664,"procs":280,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1043-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0830 23:03:06.435928 1540806 start.go:138] virtualization:  
	I0830 23:03:06.440774 1540806 out.go:177] * [functional-489151] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0830 23:03:06.443042 1540806 out.go:177]   - MINIKUBE_LOCATION=17114
	I0830 23:03:06.445310 1540806 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 23:03:06.443142 1540806 notify.go:220] Checking for updates...
	I0830 23:03:06.449989 1540806 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17114-1496922/kubeconfig
	I0830 23:03:06.452335 1540806 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-1496922/.minikube
	I0830 23:03:06.454526 1540806 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0830 23:03:06.456630 1540806 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 23:03:06.458975 1540806 config.go:182] Loaded profile config "functional-489151": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0830 23:03:06.459521 1540806 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 23:03:06.482510 1540806 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0830 23:03:06.482604 1540806 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 23:03:06.573897 1540806 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-08-30 23:03:06.56426045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 23:03:06.574000 1540806 docker.go:294] overlay module found
	I0830 23:03:06.578214 1540806 out.go:177] * Using the docker driver based on existing profile
	I0830 23:03:06.580734 1540806 start.go:298] selected driver: docker
	I0830 23:03:06.580751 1540806 start.go:902] validating driver "docker" against &{Name:functional-489151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-489151 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 23:03:06.580860 1540806 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 23:03:06.584212 1540806 out.go:177] 
	W0830 23:03:06.586514 1540806 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0830 23:03:06.588695 1540806 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-489151 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.55s)
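
Both outcomes above happen during client-side validation, before anything is created: exit 23 (RSRC_INSUFFICIENT_REQ_MEMORY) for the undersized request, and a clean pass for the plain dry run. Condensed:

	out/minikube-linux-arm64 start -p functional-489151 --dry-run --memory 250MB --driver=docker --container-runtime=docker   # exit 23: below the 1800MB floor
	out/minikube-linux-arm64 start -p functional-489151 --dry-run --driver=docker --container-runtime=docker                  # validates cleanly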

TestFunctional/parallel/InternationalLanguage (0.36s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-489151 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-489151 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (357.080131ms)

-- stdout --
	* [functional-489151] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17114
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17114-1496922/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-1496922/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I0830 23:03:07.014949 1540923 out.go:296] Setting OutFile to fd 1 ...
	I0830 23:03:07.015171 1540923 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 23:03:07.015198 1540923 out.go:309] Setting ErrFile to fd 2...
	I0830 23:03:07.015219 1540923 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 23:03:07.015623 1540923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-1496922/.minikube/bin
	I0830 23:03:07.016139 1540923 out.go:303] Setting JSON to false
	I0830 23:03:07.017536 1540923 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":27923,"bootTime":1693408664,"procs":282,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1043-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0830 23:03:07.017627 1540923 start.go:138] virtualization:  
	I0830 23:03:07.020448 1540923 out.go:177] * [functional-489151] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	I0830 23:03:07.022802 1540923 out.go:177]   - MINIKUBE_LOCATION=17114
	I0830 23:03:07.025419 1540923 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0830 23:03:07.022908 1540923 notify.go:220] Checking for updates...
	I0830 23:03:07.030411 1540923 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17114-1496922/kubeconfig
	I0830 23:03:07.032595 1540923 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-1496922/.minikube
	I0830 23:03:07.034601 1540923 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0830 23:03:07.036974 1540923 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0830 23:03:07.039980 1540923 config.go:182] Loaded profile config "functional-489151": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0830 23:03:07.040553 1540923 driver.go:373] Setting default libvirt URI to qemu:///system
	I0830 23:03:07.101828 1540923 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0830 23:03:07.101920 1540923 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 23:03:07.261445 1540923 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-08-30 23:03:07.247441441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 23:03:07.261545 1540923 docker.go:294] overlay module found
	I0830 23:03:07.263873 1540923 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0830 23:03:07.266017 1540923 start.go:298] selected driver: docker
	I0830 23:03:07.266038 1540923 start.go:902] validating driver "docker" against &{Name:functional-489151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1692872184-17120@sha256:42602f0d347faca66d9347bdc33243fe5f4d6b3fff3ba53f3b2fc2d5fe63e9ec Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-489151 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0830 23:03:07.266144 1540923 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0830 23:03:07.268635 1540923 out.go:177] 
	W0830 23:03:07.270678 1540923 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0830 23:03:07.272720 1540923 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.36s)

TestFunctional/parallel/StatusCmd (1.25s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.25s)

TestFunctional/parallel/ServiceCmdConnect (7.77s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-489151 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-489151 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-84nlf" [06871b21-6252-4e6d-951d-7d940ad8ebd2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-84nlf" [06871b21-6252-4e6d-951d-7d940ad8ebd2] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.016330127s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31460
functional_test.go:1674: http://192.168.49.2:31460: success! body:

Hostname: hello-node-connect-7799dfb7c6-84nlf

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31460
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.77s)
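
The flow above, condensed: create a deployment, expose it as a NodePort service, and let minikube resolve the node URL (port 31460 was assigned in this run):

	kubectl --context functional-489151 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-489151 expose deployment hello-node-connect --type=NodePort --port=8080
	out/minikube-linux-arm64 -p functional-489151 service hello-node-connect --url   # e.g. http://192.168.49.2:31460
	curl http://192.168.49.2:31460/   # echoserver reports the request details shown above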

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (25.92s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3626f36f-2896-4692-b0d7-f0ffc08253bd] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.011619064s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-489151 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-489151 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-489151 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-489151 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6ed3d95d-e8b9-401a-8134-36730cda94a9] Pending
helpers_test.go:344: "sp-pod" [6ed3d95d-e8b9-401a-8134-36730cda94a9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6ed3d95d-e8b9-401a-8134-36730cda94a9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.016832359s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-489151 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-489151 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-489151 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9d69287d-8a83-4fc1-abd7-177a6458f891] Pending
helpers_test.go:344: "sp-pod" [9d69287d-8a83-4fc1-abd7-177a6458f891] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9d69287d-8a83-4fc1-abd7-177a6458f891] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.037418684s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-489151 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.92s)
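
The persistence check above, condensed: a file written through the first pod must still be visible from a replacement pod bound to the same claim:

	kubectl --context functional-489151 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-489151 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-489151 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-489151 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-489151 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-489151 exec sp-pod -- ls /tmp/mount   # foo survives the pod recreation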

TestFunctional/parallel/SSHCmd (0.77s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.77s)

TestFunctional/parallel/CpCmd (1.57s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh -n functional-489151 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 cp functional-489151:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1215011321/001/cp-test.txt
E0830 23:02:20.065051 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh -n functional-489151 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.57s)

TestFunctional/parallel/FileSync (0.36s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1502303/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh "sudo cat /etc/test/nested/copy/1502303/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)

TestFunctional/parallel/CertSync (2.38s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1502303.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh "sudo cat /etc/ssl/certs/1502303.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1502303.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh "sudo cat /usr/share/ca-certificates/1502303.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/15023032.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh "sudo cat /etc/ssl/certs/15023032.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/15023032.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh "sudo cat /usr/share/ca-certificates/15023032.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.38s)
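
The checks above verify that a host-provided certificate is synced into the node at both /etc/ssl/certs/<name>.pem and /usr/share/ca-certificates/<name>.pem, plus what appears to be an OpenSSL subject-hash alias under /etc/ssl/certs. Spot-checking one pair:

	out/minikube-linux-arm64 -p functional-489151 ssh "sudo cat /etc/ssl/certs/1502303.pem"
	out/minikube-linux-arm64 -p functional-489151 ssh "sudo cat /etc/ssl/certs/51391683.0"   # hash-named alias of the same cert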

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-489151 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.36s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-489151 ssh "sudo systemctl is-active crio": exit status 1 (361.247797ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.36s)
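
This test asserts the non-selected runtime is off: with --container-runtime=docker, `systemctl is-active crio` must print `inactive` and exit non-zero (systemd uses status 3 for an inactive unit, surfaced here as the ssh exit 1):

	out/minikube-linux-arm64 -p functional-489151 ssh "sudo systemctl is-active crio"   # prints inactive, non-zero exit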

TestFunctional/parallel/License (0.49s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.49s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (0.84s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.84s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-489151 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.1
registry.k8s.io/kube-proxy:v1.28.1
registry.k8s.io/kube-controller-manager:v1.28.1
registry.k8s.io/kube-apiserver:v1.28.1
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-489151
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-489151
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-489151 image ls --format short --alsologtostderr:
I0830 23:03:15.597861 1542308 out.go:296] Setting OutFile to fd 1 ...
I0830 23:03:15.598007 1542308 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 23:03:15.598012 1542308 out.go:309] Setting ErrFile to fd 2...
I0830 23:03:15.598029 1542308 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 23:03:15.598288 1542308 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-1496922/.minikube/bin
I0830 23:03:15.598855 1542308 config.go:182] Loaded profile config "functional-489151": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0830 23:03:15.598974 1542308 config.go:182] Loaded profile config "functional-489151": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0830 23:03:15.599423 1542308 cli_runner.go:164] Run: docker container inspect functional-489151 --format={{.State.Status}}
I0830 23:03:15.625172 1542308 ssh_runner.go:195] Run: systemctl --version
I0830 23:03:15.625224 1542308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-489151
I0830 23:03:15.651851 1542308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34347 SSHKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/functional-489151/id_rsa Username:docker}
I0830 23:03:15.762168 1542308 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-489151 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/library/nginx                     | alpine            | fa0c6bb795403 | 43.4MB |
| registry.k8s.io/kube-scheduler              | v1.28.1           | b4a5a57e99492 | 57.8MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/kube-proxy                  | v1.28.1           | 812f5241df7fd | 68.3MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 9cdd6470f48c8 | 181MB  |
| gcr.io/google-containers/addon-resizer      | functional-489151 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/localhost/my-image                | functional-489151 | bd8793181c6ea | 1.41MB |
| docker.io/library/minikube-local-cache-test | functional-489151 | 4e0ce14c3872d | 30B    |
| registry.k8s.io/kube-apiserver              | v1.28.1           | b29fb62480892 | 119MB  |
| registry.k8s.io/kube-controller-manager     | v1.28.1           | 8b6e1980b7584 | 116MB  |
| docker.io/library/nginx                     | latest            | ab73c7fd67234 | 192MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-489151 image ls --format table --alsologtostderr:
I0830 23:03:19.564073 1542852 out.go:296] Setting OutFile to fd 1 ...
I0830 23:03:19.564253 1542852 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 23:03:19.564266 1542852 out.go:309] Setting ErrFile to fd 2...
I0830 23:03:19.564272 1542852 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 23:03:19.564591 1542852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-1496922/.minikube/bin
I0830 23:03:19.565251 1542852 config.go:182] Loaded profile config "functional-489151": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0830 23:03:19.565426 1542852 config.go:182] Loaded profile config "functional-489151": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0830 23:03:19.565928 1542852 cli_runner.go:164] Run: docker container inspect functional-489151 --format={{.State.Status}}
I0830 23:03:19.584057 1542852 ssh_runner.go:195] Run: systemctl --version
I0830 23:03:19.584110 1542852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-489151
I0830 23:03:19.602361 1542852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34347 SSHKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/functional-489151/id_rsa Username:docker}
I0830 23:03:19.702830 1542852 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)
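For reference, the three ImageList tests in this group exercise the same subcommand and differ only in the --format flag (a sketch using the profile name from this run):
    out/minikube-linux-arm64 -p functional-489151 image ls --format table
    out/minikube-linux-arm64 -p functional-489151 image ls --format json
    out/minikube-linux-arm64 -p functional-489151 image ls --format yaml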

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 image ls --format json --alsologtostderr
2023/08/30 23:03:19 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-489151 image ls --format json --alsologtostderr:
[{"id":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.1"],"size":"57800000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.1"],"size":"116000000"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481
e48bcde619735890840ace","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"181000000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-489151"],"size":"32900000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"4e0ce14c3872d7a7fc438bd3f5895475caa61247405cfeed7c80e
603de9ef361","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-489151"],"size":"30"},{"id":"b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.1"],"size":"119000000"},{"id":"ab73c7fd672341e41ec600081253d0b99ea31d0c1acdfb46a1485004472da7ac","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"bd8793181c6ea928a7c89926c6d1df82b4b3c106e9a623b18be0c6c15ca58228","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-489151"],"size":"1410000"},{"id":"fa0c6bb795403f8762e5cbf7b9f395aa036e7bd61c707485c1968b79bb3da9f1","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43400000"},{"id":"812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26","repoDigests":[],"repoTags":["registry
.k8s.io/kube-proxy:v1.28.1"],"size":"68300000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-489151 image ls --format json --alsologtostderr:
I0830 23:03:19.278992 1542826 out.go:296] Setting OutFile to fd 1 ...
I0830 23:03:19.279189 1542826 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 23:03:19.279200 1542826 out.go:309] Setting ErrFile to fd 2...
I0830 23:03:19.279206 1542826 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 23:03:19.279475 1542826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-1496922/.minikube/bin
I0830 23:03:19.280070 1542826 config.go:182] Loaded profile config "functional-489151": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0830 23:03:19.280197 1542826 config.go:182] Loaded profile config "functional-489151": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0830 23:03:19.280727 1542826 cli_runner.go:164] Run: docker container inspect functional-489151 --format={{.State.Status}}
I0830 23:03:19.307952 1542826 ssh_runner.go:195] Run: systemctl --version
I0830 23:03:19.308004 1542826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-489151
I0830 23:03:19.339695 1542826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34347 SSHKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/functional-489151/id_rsa Username:docker}
I0830 23:03:19.442821 1542826 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
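The JSON output above is a flat array of {id, repoDigests, repoTags, size} objects, so it pipes cleanly into jq (a sketch; assumes jq is installed on the host):
    out/minikube-linux-arm64 -p functional-489151 image ls --format json | jq -r '.[].repoTags[]'
    # prints one repo:tag per line, e.g. registry.k8s.io/pause:3.9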

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-489151 image ls --format yaml --alsologtostderr:
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-489151
size: "32900000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.1
size: "57800000"
- id: ab73c7fd672341e41ec600081253d0b99ea31d0c1acdfb46a1485004472da7ac
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.1
size: "68300000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.1
size: "116000000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.1
size: "119000000"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "181000000"
- id: 4e0ce14c3872d7a7fc438bd3f5895475caa61247405cfeed7c80e603de9ef361
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-489151
size: "30"
- id: fa0c6bb795403f8762e5cbf7b9f395aa036e7bd61c707485c1968b79bb3da9f1
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43400000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-489151 image ls --format yaml --alsologtostderr:
I0830 23:03:15.939586 1542384 out.go:296] Setting OutFile to fd 1 ...
I0830 23:03:15.939811 1542384 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 23:03:15.939837 1542384 out.go:309] Setting ErrFile to fd 2...
I0830 23:03:15.939856 1542384 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 23:03:15.940173 1542384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-1496922/.minikube/bin
I0830 23:03:15.940826 1542384 config.go:182] Loaded profile config "functional-489151": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0830 23:03:15.941006 1542384 config.go:182] Loaded profile config "functional-489151": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0830 23:03:15.941536 1542384 cli_runner.go:164] Run: docker container inspect functional-489151 --format={{.State.Status}}
I0830 23:03:15.969589 1542384 ssh_runner.go:195] Run: systemctl --version
I0830 23:03:15.969645 1542384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-489151
I0830 23:03:16.027827 1542384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34347 SSHKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/functional-489151/id_rsa Username:docker}
I0830 23:03:16.195517 1542384 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-489151 ssh pgrep buildkitd: exit status 1 (456.822981ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 image build -t localhost/my-image:functional-489151 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-489151 image build -t localhost/my-image:functional-489151 testdata/build --alsologtostderr: (2.236580957s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-489151 image build -t localhost/my-image:functional-489151 testdata/build --alsologtostderr:
I0830 23:03:16.852454 1542478 out.go:296] Setting OutFile to fd 1 ...
I0830 23:03:16.854398 1542478 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 23:03:16.854413 1542478 out.go:309] Setting ErrFile to fd 2...
I0830 23:03:16.854420 1542478 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0830 23:03:16.854687 1542478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-1496922/.minikube/bin
I0830 23:03:16.855293 1542478 config.go:182] Loaded profile config "functional-489151": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0830 23:03:16.855978 1542478 config.go:182] Loaded profile config "functional-489151": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0830 23:03:16.856578 1542478 cli_runner.go:164] Run: docker container inspect functional-489151 --format={{.State.Status}}
I0830 23:03:16.876178 1542478 ssh_runner.go:195] Run: systemctl --version
I0830 23:03:16.876230 1542478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-489151
I0830 23:03:16.899035 1542478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34347 SSHKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/functional-489151/id_rsa Username:docker}
I0830 23:03:17.006274 1542478 build_images.go:151] Building image from path: /tmp/build.667856608.tar
I0830 23:03:17.006361 1542478 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0830 23:03:17.029899 1542478 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.667856608.tar
I0830 23:03:17.036642 1542478 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.667856608.tar: stat -c "%s %y" /var/lib/minikube/build/build.667856608.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.667856608.tar': No such file or directory
I0830 23:03:17.036670 1542478 ssh_runner.go:362] scp /tmp/build.667856608.tar --> /var/lib/minikube/build/build.667856608.tar (3072 bytes)
I0830 23:03:17.084238 1542478 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.667856608
I0830 23:03:17.097016 1542478 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.667856608 -xf /var/lib/minikube/build/build.667856608.tar
I0830 23:03:17.109231 1542478 docker.go:339] Building image: /var/lib/minikube/build/build.667856608
I0830 23:03:17.109300 1542478 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-489151 /var/lib/minikube/build/build.667856608
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load .dockerignore
#1 transferring context:
#1 transferring context: 2B 0.0s done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 97B 0.0s done
#2 DONE 0.1s

                                                
                                                
#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 0.7s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.4s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:bd8793181c6ea928a7c89926c6d1df82b4b3c106e9a623b18be0c6c15ca58228 done
#8 naming to localhost/my-image:functional-489151 done
#8 DONE 0.1s
I0830 23:03:18.940274 1542478 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-489151 /var/lib/minikube/build/build.667856608: (1.830954683s)
I0830 23:03:18.940340 1542478 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.667856608
I0830 23:03:18.951210 1542478 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.667856608.tar
I0830 23:03:18.961621 1542478 build_images.go:207] Built localhost/my-image:functional-489151 from /tmp/build.667856608.tar
I0830 23:03:18.961646 1542478 build_images.go:123] succeeded building to: functional-489151
I0830 23:03:18.961651 1542478 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.93s)
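Reconstructed from the BuildKit steps above ([1/3] FROM, [2/3] RUN true, [3/3] ADD), the Dockerfile in testdata/build is roughly the following; this is an inference from the log, not the file itself:
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /
The test builds it inside the node with: out/minikube-linux-arm64 -p functional-489151 image build -t localhost/my-image:functional-489151 testdata/build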

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.729726586s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-489151
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.77s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-489151 docker-env) && out/minikube-linux-arm64 status -p functional-489151"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-489151 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.41s)
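The bash invocation above, split into its two steps (a sketch of the same pattern):
    eval $(out/minikube-linux-arm64 -p functional-489151 docker-env)   # exports DOCKER_HOST and related vars for the cluster's daemon
    docker images                                                      # now lists images inside the minikube node, not the host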

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.25s)
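All three update-context cases above run the same command; it rewrites the profile's kubeconfig entry so kubectl keeps pointing at the current API server address (sketch):
    out/minikube-linux-arm64 -p functional-489151 update-context --alsologtostderr -v=2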

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 image load --daemon gcr.io/google-containers/addon-resizer:functional-489151 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-489151 image load --daemon gcr.io/google-containers/addon-resizer:functional-489151 --alsologtostderr: (4.172509408s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-489151 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-489151 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-k622s" [d8c0910b-7f1e-499d-a71f-4c4f7b504091] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-k622s" [d8c0910b-7f1e-499d-a71f-4c4f7b504091] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.033164327s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 image load --daemon gcr.io/google-containers/addon-resizer:functional-489151 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-489151 image load --daemon gcr.io/google-containers/addon-resizer:functional-489151 --alsologtostderr: (2.679030788s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.91s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.74779607s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-489151
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 image load --daemon gcr.io/google-containers/addon-resizer:functional-489151 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-489151 image load --daemon gcr.io/google-containers/addon-resizer:functional-489151 --alsologtostderr: (3.210581161s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 image save gcr.io/google-containers/addon-resizer:functional-489151 /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-arm64 -p functional-489151 image save gcr.io/google-containers/addon-resizer:functional-489151 /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr: (1.046166755s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.48s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 service list -o json
functional_test.go:1493: Took "526.053511ms" to run "out/minikube-linux-arm64 -p functional-489151 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 image rm gcr.io/google-containers/addon-resizer:functional-489151 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.63s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30116
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-489151 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr: (1.112122165s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.45s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.59s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30116
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.51s)
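The URL resolved above combines the node IP with the NodePort assigned when the deployment was exposed in ServiceCmd/DeployApp (sketch):
    out/minikube-linux-arm64 -p functional-489151 service hello-node --url
    # -> http://192.168.49.2:30116  (node IP 192.168.49.2, NodePort 30116)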

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-489151
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 image save --daemon gcr.io/google-containers/addon-resizer:functional-489151 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-arm64 -p functional-489151 image save --daemon gcr.io/google-containers/addon-resizer:functional-489151 --alsologtostderr: (1.308035517s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-489151
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.36s)
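Taken together, the four image transfer tests above cover both directions of both transports (sketch; IMG and the tar path are placeholders):
    minikube -p <profile> image save IMG /path/img.tar     # cluster runtime -> tar archive
    minikube -p <profile> image load /path/img.tar         # tar archive -> cluster runtime
    minikube -p <profile> image save --daemon IMG          # cluster runtime -> host Docker daemon
    minikube -p <profile> image load --daemon IMG          # host Docker daemon -> cluster runtime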

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-489151 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-489151 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-489151 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1538517: os: process already finished
helpers_test.go:502: unable to terminate pid 1538410: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-489151 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.76s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-489151 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-489151 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [07d0e1f0-540a-470b-bb4a-08951fad4212] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [07d0e1f0-540a-470b-bb4a-08951fad4212] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.02027794s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.41s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-489151 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.121.34 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-489151 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
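The tunnel serial group above amounts to this flow (sketch; the service name and IP are the ones from this run, and tunnel may need elevated privileges depending on the host):
    out/minikube-linux-arm64 -p functional-489151 tunnel &     # route LoadBalancer traffic into the cluster
    kubectl --context functional-489151 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl http://10.96.121.34/                                  # the ingress IP reported by AccessDirect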

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "355.583237ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "72.431783ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "353.754824ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "63.617452ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
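The timings above show what --light changes: it skips validating each cluster's status, which is why the light listing is roughly 5x faster in this run (sketch):
    out/minikube-linux-arm64 profile list -o json           # ~354 ms here
    out/minikube-linux-arm64 profile list -o json --light   # ~64 ms here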

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-489151 /tmp/TestFunctionalparallelMountCmdany-port3566962605/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1693436580656514604" to /tmp/TestFunctionalparallelMountCmdany-port3566962605/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1693436580656514604" to /tmp/TestFunctionalparallelMountCmdany-port3566962605/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1693436580656514604" to /tmp/TestFunctionalparallelMountCmdany-port3566962605/001/test-1693436580656514604
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh "findmnt -T /mount-9p | grep 9p"
E0830 23:03:01.025569 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-489151 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (377.290911ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 30 23:03 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 30 23:03 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 30 23:03 test-1693436580656514604
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh cat /mount-9p/test-1693436580656514604
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-489151 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [62ec4dfb-c071-497e-ab50-a233d949c086] Pending
helpers_test.go:344: "busybox-mount" [62ec4dfb-c071-497e-ab50-a233d949c086] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [62ec4dfb-c071-497e-ab50-a233d949c086] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [62ec4dfb-c071-497e-ab50-a233d949c086] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.019704115s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-489151 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-489151 /tmp/TestFunctionalparallelMountCmdany-port3566962605/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.61s)
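The mount check pattern used above, in isolation (sketch; /host/dir is a placeholder):
    out/minikube-linux-arm64 mount -p functional-489151 /host/dir:/mount-9p &           # 9p mount into the node
    out/minikube-linux-arm64 -p functional-489151 ssh "findmnt -T /mount-9p | grep 9p"  # retried until the mount appears
    out/minikube-linux-arm64 -p functional-489151 ssh "sudo umount -f /mount-9p"        # teardown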

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-489151 /tmp/TestFunctionalparallelMountCmdspecific-port2574868011/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-489151 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (639.010458ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-489151 /tmp/TestFunctionalparallelMountCmdspecific-port2574868011/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-489151 ssh "sudo umount -f /mount-9p": exit status 1 (362.676866ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-489151 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-489151 /tmp/TestFunctionalparallelMountCmdspecific-port2574868011/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.60s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (3s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-489151 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3489258106/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-489151 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3489258106/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-489151 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3489258106/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-489151 ssh "findmnt -T" /mount1: exit status 1 (1.284693993s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-489151 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-489151 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-489151 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3489258106/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-489151 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3489258106/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-489151 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3489258106/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (3.00s)
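Cleanup here relies on the --kill flag rather than unmounting each path (sketch):
    out/minikube-linux-arm64 mount -p functional-489151 --kill=true   # terminates every mount helper for the profile; the three 9p mounts go with them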

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-489151
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-489151
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-489151
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestImageBuild/serial/Setup (32.78s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-693828 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-693828 --driver=docker  --container-runtime=docker: (32.775542924s)
--- PASS: TestImageBuild/serial/Setup (32.78s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.81s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-693828
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-693828: (1.809429972s)
--- PASS: TestImageBuild/serial/NormalBuild (1.81s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.94s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-693828
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.94s)
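--build-opt forwards options to the underlying builder, so build-arg values reach ARG instructions in the Dockerfile (sketch; the Dockerfile below is hypothetical, only the command line is from this run):
    # hypothetical testdata/image-build/test-arg/Dockerfile
    FROM busybox
    ARG ENV_A
    RUN echo "ENV_A=$ENV_A"
    out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-693828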

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.72s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-693828
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.72s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.75s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-693828
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.75s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (109.47s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-211142 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0830 23:04:22.946314 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-211142 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (1m49.467742911s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (109.47s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.44s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-211142 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-211142 addons enable ingress --alsologtostderr -v=5: (10.436212365s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.44s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.7s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-211142 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.70s)
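The two activations above correspond to these manual steps (profile name taken from this run):

    # Enable the ingress controller, then the ingress-dns addon
    minikube -p ingress-addon-legacy-211142 addons enable ingress
    minikube -p ingress-addon-legacy-211142 addons enable ingress-dns
    # Either can be rolled back later with `addons disable`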

TestJSONOutput/start/Command (82.25s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-353896 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E0830 23:07:06.786565 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
E0830 23:07:25.342361 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
E0830 23:07:25.347632 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
E0830 23:07:25.357917 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
E0830 23:07:25.378167 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
E0830 23:07:25.418409 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
E0830 23:07:25.498714 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
E0830 23:07:25.659064 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
E0830 23:07:25.979274 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
E0830 23:07:26.620110 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
E0830 23:07:27.901069 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
E0830 23:07:30.462492 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
E0830 23:07:35.582674 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
E0830 23:07:45.823512 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
E0830 23:08:06.303723 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-353896 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m22.248535034s)
--- PASS: TestJSONOutput/start/Command (82.25s)
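With --output=json, minikube emits one CloudEvents-style JSON object per line (the TestErrorJSONOutput stdout below shows the exact shape). A consumption sketch, assuming jq is installed; the profile name is a placeholder:

    # Print only the human-readable step messages from a JSON-mode start
    minikube start -p json-demo --output=json --user=testUser \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'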

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.61s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-353896 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.61s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.58s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-353896 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.92s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-353896 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-353896 --output=json --user=testUser: (10.924432702s)
--- PASS: TestJSONOutput/stop/Command (10.92s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-446427 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-446427 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (87.784589ms)

-- stdout --
	{"specversion":"1.0","id":"6ed50fac-b74b-4941-8b36-3c4c6fda9f9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-446427] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f5cdf0f2-8226-4e26-864d-9c500e52e240","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17114"}}
	{"specversion":"1.0","id":"1dd34c6b-2a17-462f-b239-65e7e8640252","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"eee2074b-2d73-4510-913c-25c26182cba8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17114-1496922/kubeconfig"}}
	{"specversion":"1.0","id":"67d71a6b-a447-4364-880e-30fd4cb9963b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-1496922/.minikube"}}
	{"specversion":"1.0","id":"5530c569-c58f-4156-b239-ca3e3db0cd9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"dd77c696-ce83-4d9f-83d4-d2d0204fe209","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ca868afa-215b-423c-987d-a5a9d3cb7dbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-446427" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-446427
--- PASS: TestErrorJSONOutput (0.24s)
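A sketch of handling that failure programmatically (the deliberately invalid driver name comes from the test; the profile name is a placeholder):

    # An unsupported driver surfaces as an io.k8s.sigs.minikube.error event
    # plus a non-zero exit code (56, DRV_UNSUPPORTED_OS, in this run)
    out=$(minikube start -p err-demo --output=json --driver=fail)
    code=$?
    echo "$out" | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name'
    echo "exit code: $code"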

TestKicCustomNetwork/create_custom_network (36.2s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-869861 --network=
E0830 23:08:47.263913 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-869861 --network=: (34.089461714s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-869861" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-869861
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-869861: (2.084918223s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.20s)
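A sketch of the same flow with an explicit network name (net-demo and demo-net are placeholders):

    # With the docker driver, minikube creates (or reuses) the named network
    minikube start -p net-demo --network=demo-net
    docker network ls --format '{{.Name}}' | grep demo-net
    minikube delete -p net-demo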

TestKicCustomNetwork/use_default_bridge_network (34.38s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-263755 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-263755 --network=bridge: (32.379589882s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-263755" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-263755
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-263755: (1.9728499s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.38s)

TestKicExistingNetwork (37.68s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-458624 --network=existing-network
E0830 23:10:09.184853 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-458624 --network=existing-network: (35.554654086s)
helpers_test.go:175: Cleaning up "existing-network-458624" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-458624
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-458624: (1.971444134s)
--- PASS: TestKicExistingNetwork (37.68s)
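The pre-created-network case above boils down to (names are placeholders):

    # Create the docker network first, then point minikube at it
    docker network create existing-net
    minikube start -p exist-demo --network=existing-net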

TestKicCustomSubnet (34.6s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-974834 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-974834 --subnet=192.168.60.0/24: (32.522076605s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-974834 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-974834" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-974834
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-974834: (2.055006924s)
--- PASS: TestKicCustomSubnet (34.60s)
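The subnet check above can be reproduced directly; the profile name is a placeholder:

    minikube start -p subnet-demo --subnet=192.168.60.0/24
    # The profile's docker network should report the requested subnet
    docker network inspect subnet-demo --format '{{(index .IPAM.Config 0).Subnet}}'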

TestKicStaticIP (35.84s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-516602 --static-ip=192.168.200.200
E0830 23:11:02.226318 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt: no such file or directory
E0830 23:11:02.231590 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt: no such file or directory
E0830 23:11:02.241818 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt: no such file or directory
E0830 23:11:02.262062 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt: no such file or directory
E0830 23:11:02.302379 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt: no such file or directory
E0830 23:11:02.382628 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt: no such file or directory
E0830 23:11:02.543251 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt: no such file or directory
E0830 23:11:02.863803 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt: no such file or directory
E0830 23:11:03.504620 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt: no such file or directory
E0830 23:11:04.785220 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt: no such file or directory
E0830 23:11:07.345594 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt: no such file or directory
E0830 23:11:12.466493 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt: no such file or directory
E0830 23:11:22.707225 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-516602 --static-ip=192.168.200.200: (33.640497364s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-516602 ip
helpers_test.go:175: Cleaning up "static-ip-516602" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-516602
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-516602: (2.014557831s)
--- PASS: TestKicStaticIP (35.84s)
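Equivalent manual steps (placeholder profile; the address is the one exercised above):

    minikube start -p ip-demo --static-ip=192.168.200.200
    # Should print the pinned address
    minikube -p ip-demo ip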

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (69.82s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-603887 --driver=docker  --container-runtime=docker
E0830 23:11:39.102893 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
E0830 23:11:43.188398 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-603887 --driver=docker  --container-runtime=docker: (31.453256852s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-606562 --driver=docker  --container-runtime=docker
E0830 23:12:24.148602 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt: no such file or directory
E0830 23:12:25.342443 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-606562 --driver=docker  --container-runtime=docker: (32.844486366s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-603887
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-606562
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-606562" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-606562
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-606562: (2.143228009s)
helpers_test.go:175: Cleaning up "first-603887" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-603887
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-603887: (2.100509143s)
--- PASS: TestMinikubeProfile (69.82s)
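A sketch of the multi-profile workflow exercised above (profile names are placeholders):

    # Two independent clusters on one host
    minikube start -p first-demo
    minikube start -p second-demo
    # Switch the active profile, then list all profiles as JSON
    minikube profile first-demo
    minikube profile list -o json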

TestMountStart/serial/StartWithMountFirst (11.01s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-603254 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0830 23:12:53.025062 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-603254 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (10.005497605s)
--- PASS: TestMountStart/serial/StartWithMountFirst (11.01s)
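A sketch of the mount flags used above; by default a host directory (the home directory) should appear inside the guest at /minikube-host (profile name is a placeholder):

    # 9p mount with explicit uid/gid, msize, and server port;
    # --no-kubernetes skips cluster bringup for a faster mount-only check
    minikube start -p mount-demo --memory=2048 --mount \
      --mount-uid 0 --mount-gid 0 --mount-msize 6543 --mount-port 46464 \
      --no-kubernetes
    minikube -p mount-demo ssh -- ls /minikube-host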

TestMountStart/serial/VerifyMountFirst (0.29s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-603254 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMountStart/serial/StartWithMountSecond (10.76s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-605084 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-605084 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.76127802s)
--- PASS: TestMountStart/serial/StartWithMountSecond (10.76s)

TestMountStart/serial/VerifyMountSecond (0.29s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-605084 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (1.5s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-603254 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-603254 --alsologtostderr -v=5: (1.504247859s)
--- PASS: TestMountStart/serial/DeleteFirst (1.50s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-605084 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.22s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-605084
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-605084: (1.216826413s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (9.54s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-605084
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-605084: (8.535960624s)
--- PASS: TestMountStart/serial/RestartStopped (9.54s)

TestMountStart/serial/VerifyMountPostStop (0.29s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-605084 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

TestMultiNode/serial/FreshStart2Nodes (79.6s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-200454 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0830 23:13:46.069217 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-200454 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m19.013510003s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (79.60s)
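Equivalent manual bringup (placeholder profile name; flags mirror the command above):

    # Two-node cluster; --wait=true blocks until core components are healthy
    minikube start -p multi-demo --nodes=2 --memory=2200 --wait=true \
      --driver=docker --container-runtime=docker
    minikube -p multi-demo status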

TestMultiNode/serial/DeployApp2Nodes (45.93s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-200454 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-200454 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-200454 -- rollout status deployment/busybox: (2.67881939s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-200454 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-200454 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-200454 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-200454 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-200454 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-200454 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-200454 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-200454 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-200454 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-200454 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-200454 -- exec busybox-5bc68d56bd-gl24h -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-200454 -- exec busybox-5bc68d56bd-rbc5k -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-200454 -- exec busybox-5bc68d56bd-gl24h -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-200454 -- exec busybox-5bc68d56bd-rbc5k -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-200454 -- exec busybox-5bc68d56bd-gl24h -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-200454 -- exec busybox-5bc68d56bd-rbc5k -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (45.93s)
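The repeated "expected 2 Pod IPs but got 1 (may be temporary)" lines above are the test polling while the second replica schedules. A sketch of the same wait loop (manifest name from the test's testdata):

    kubectl apply -f multinode-pod-dns-test.yaml
    kubectl rollout status deployment/busybox
    # Poll until both replicas have been assigned pod IPs
    until [ "$(kubectl get pods -o jsonpath='{.items[*].status.podIP}' | wc -w)" -eq 2 ]; do
      sleep 2
    done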

TestMultiNode/serial/PingHostFrom2Pods (1.17s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-200454 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-200454 -- exec busybox-5bc68d56bd-gl24h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-200454 -- exec busybox-5bc68d56bd-gl24h -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-200454 -- exec busybox-5bc68d56bd-rbc5k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-200454 -- exec busybox-5bc68d56bd-rbc5k -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.17s)

TestMultiNode/serial/AddNode (20.77s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-200454 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-200454 -v 3 --alsologtostderr: (19.990824124s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (20.77s)

TestMultiNode/serial/ProfileList (0.36s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

TestMultiNode/serial/CopyFile (11.37s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 cp testdata/cp-test.txt multinode-200454:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 ssh -n multinode-200454 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 cp multinode-200454:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1465979527/001/cp-test_multinode-200454.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 ssh -n multinode-200454 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 cp multinode-200454:/home/docker/cp-test.txt multinode-200454-m02:/home/docker/cp-test_multinode-200454_multinode-200454-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 ssh -n multinode-200454 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 ssh -n multinode-200454-m02 "sudo cat /home/docker/cp-test_multinode-200454_multinode-200454-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 cp multinode-200454:/home/docker/cp-test.txt multinode-200454-m03:/home/docker/cp-test_multinode-200454_multinode-200454-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 ssh -n multinode-200454 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 ssh -n multinode-200454-m03 "sudo cat /home/docker/cp-test_multinode-200454_multinode-200454-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 cp testdata/cp-test.txt multinode-200454-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 ssh -n multinode-200454-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 cp multinode-200454-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1465979527/001/cp-test_multinode-200454-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 ssh -n multinode-200454-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 cp multinode-200454-m02:/home/docker/cp-test.txt multinode-200454:/home/docker/cp-test_multinode-200454-m02_multinode-200454.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 ssh -n multinode-200454-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 ssh -n multinode-200454 "sudo cat /home/docker/cp-test_multinode-200454-m02_multinode-200454.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 cp multinode-200454-m02:/home/docker/cp-test.txt multinode-200454-m03:/home/docker/cp-test_multinode-200454-m02_multinode-200454-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 ssh -n multinode-200454-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 ssh -n multinode-200454-m03 "sudo cat /home/docker/cp-test_multinode-200454-m02_multinode-200454-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 cp testdata/cp-test.txt multinode-200454-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 ssh -n multinode-200454-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 cp multinode-200454-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1465979527/001/cp-test_multinode-200454-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 ssh -n multinode-200454-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 cp multinode-200454-m03:/home/docker/cp-test.txt multinode-200454:/home/docker/cp-test_multinode-200454-m03_multinode-200454.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 ssh -n multinode-200454-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 ssh -n multinode-200454 "sudo cat /home/docker/cp-test_multinode-200454-m03_multinode-200454.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 cp multinode-200454-m03:/home/docker/cp-test.txt multinode-200454-m02:/home/docker/cp-test_multinode-200454-m03_multinode-200454-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 ssh -n multinode-200454-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 ssh -n multinode-200454-m02 "sudo cat /home/docker/cp-test_multinode-200454-m03_multinode-200454-m02.txt"
E0830 23:16:02.225319 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt: no such file or directory
--- PASS: TestMultiNode/serial/CopyFile (11.37s)
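The copy matrix above reduces to three cp shapes plus an ssh check; a sketch with placeholder names:

    # Host -> node, node -> host, and node -> node copies
    minikube -p multi-demo cp testdata/cp-test.txt multi-demo:/home/docker/cp-test.txt
    minikube -p multi-demo cp multi-demo:/home/docker/cp-test.txt /tmp/cp-test.txt
    minikube -p multi-demo cp multi-demo:/home/docker/cp-test.txt multi-demo-m02:/home/docker/cp-test.txt
    # Verify on a specific node with ssh -n
    minikube -p multi-demo ssh -n multi-demo-m02 "sudo cat /home/docker/cp-test.txt"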

TestMultiNode/serial/StopNode (2.4s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-200454 node stop m03: (1.256356704s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-200454 status: exit status 7 (574.359003ms)

-- stdout --
	multinode-200454
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-200454-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-200454-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-200454 status --alsologtostderr: exit status 7 (571.382472ms)

-- stdout --
	multinode-200454
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-200454-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-200454-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0830 23:16:04.415683 1607337 out.go:296] Setting OutFile to fd 1 ...
	I0830 23:16:04.415918 1607337 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 23:16:04.415944 1607337 out.go:309] Setting ErrFile to fd 2...
	I0830 23:16:04.415963 1607337 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 23:16:04.416258 1607337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-1496922/.minikube/bin
	I0830 23:16:04.416463 1607337 out.go:303] Setting JSON to false
	I0830 23:16:04.416553 1607337 mustload.go:65] Loading cluster: multinode-200454
	I0830 23:16:04.416634 1607337 notify.go:220] Checking for updates...
	I0830 23:16:04.419500 1607337 config.go:182] Loaded profile config "multinode-200454": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0830 23:16:04.419546 1607337 status.go:255] checking status of multinode-200454 ...
	I0830 23:16:04.420063 1607337 cli_runner.go:164] Run: docker container inspect multinode-200454 --format={{.State.Status}}
	I0830 23:16:04.446303 1607337 status.go:330] multinode-200454 host status = "Running" (err=<nil>)
	I0830 23:16:04.446327 1607337 host.go:66] Checking if "multinode-200454" exists ...
	I0830 23:16:04.446628 1607337 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-200454
	I0830 23:16:04.475225 1607337 host.go:66] Checking if "multinode-200454" exists ...
	I0830 23:16:04.475522 1607337 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0830 23:16:04.475577 1607337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-200454
	I0830 23:16:04.495914 1607337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34417 SSHKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/multinode-200454/id_rsa Username:docker}
	I0830 23:16:04.599384 1607337 ssh_runner.go:195] Run: systemctl --version
	I0830 23:16:04.604792 1607337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 23:16:04.618274 1607337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0830 23:16:04.690539 1607337 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-08-30 23:16:04.680291144 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1043-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0830 23:16:04.691221 1607337 kubeconfig.go:92] found "multinode-200454" server: "https://192.168.58.2:8443"
	I0830 23:16:04.691242 1607337 api_server.go:166] Checking apiserver status ...
	I0830 23:16:04.691286 1607337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0830 23:16:04.704590 1607337 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2086/cgroup
	I0830 23:16:04.716246 1607337 api_server.go:182] apiserver freezer: "7:freezer:/docker/3fd4e534f11388f031d456b5fc6d6def218eca052ca47cde270c8807b2f7676d/kubepods/burstable/pod4710d2fa2d20b4e93c4cc683e102dcda/568a513361d781d70d981ac77f9d5fdb6e163e0f84b935958f09404b29ddd0e3"
	I0830 23:16:04.716317 1607337 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3fd4e534f11388f031d456b5fc6d6def218eca052ca47cde270c8807b2f7676d/kubepods/burstable/pod4710d2fa2d20b4e93c4cc683e102dcda/568a513361d781d70d981ac77f9d5fdb6e163e0f84b935958f09404b29ddd0e3/freezer.state
	I0830 23:16:04.726490 1607337 api_server.go:204] freezer state: "THAWED"
	I0830 23:16:04.726518 1607337 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0830 23:16:04.735509 1607337 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0830 23:16:04.735534 1607337 status.go:421] multinode-200454 apiserver status = Running (err=<nil>)
	I0830 23:16:04.735544 1607337 status.go:257] multinode-200454 status: &{Name:multinode-200454 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0830 23:16:04.735561 1607337 status.go:255] checking status of multinode-200454-m02 ...
	I0830 23:16:04.735920 1607337 cli_runner.go:164] Run: docker container inspect multinode-200454-m02 --format={{.State.Status}}
	I0830 23:16:04.753332 1607337 status.go:330] multinode-200454-m02 host status = "Running" (err=<nil>)
	I0830 23:16:04.753353 1607337 host.go:66] Checking if "multinode-200454-m02" exists ...
	I0830 23:16:04.753651 1607337 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-200454-m02
	I0830 23:16:04.772498 1607337 host.go:66] Checking if "multinode-200454-m02" exists ...
	I0830 23:16:04.772859 1607337 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0830 23:16:04.772948 1607337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-200454-m02
	I0830 23:16:04.794682 1607337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34422 SSHKeyPath:/home/jenkins/minikube-integration/17114-1496922/.minikube/machines/multinode-200454-m02/id_rsa Username:docker}
	I0830 23:16:04.891249 1607337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0830 23:16:04.904520 1607337 status.go:257] multinode-200454-m02 status: &{Name:multinode-200454-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0830 23:16:04.904553 1607337 status.go:255] checking status of multinode-200454-m03 ...
	I0830 23:16:04.904859 1607337 cli_runner.go:164] Run: docker container inspect multinode-200454-m03 --format={{.State.Status}}
	I0830 23:16:04.923960 1607337 status.go:330] multinode-200454-m03 host status = "Stopped" (err=<nil>)
	I0830 23:16:04.923988 1607337 status.go:343] host is not running, skipping remaining checks
	I0830 23:16:04.923994 1607337 status.go:257] multinode-200454-m03 status: &{Name:multinode-200454-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.40s)
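As the run above shows, status exits 7 once any node is stopped; a sketch with placeholder names:

    minikube -p multi-demo node stop m03
    minikube -p multi-demo status
    echo "status exit code: $?"   # 7 rather than 0 with a stopped node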

TestMultiNode/serial/StartAfterStop (14.06s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-200454 node start m03 --alsologtostderr: (13.170527813s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (14.06s)

TestMultiNode/serial/RestartKeepsNodes (119.79s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-200454
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-200454
E0830 23:16:29.909446 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt: no such file or directory
E0830 23:16:39.102987 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-200454: (22.743739185s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-200454 --wait=true -v=8 --alsologtostderr
E0830 23:17:25.341884 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
E0830 23:18:02.147206 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-200454 --wait=true -v=8 --alsologtostderr: (1m36.884645472s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-200454
--- PASS: TestMultiNode/serial/RestartKeepsNodes (119.79s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-200454 node delete m03: (4.349879025s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.09s)
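The Ready-condition go-template in the last step is dense; a roughly equivalent jsonpath query (an alternative formulation, not what the test itself runs) is:

    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'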

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-200454 stop: (21.631468719s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-200454 status: exit status 7 (95.277566ms)

                                                
                                                
-- stdout --
	multinode-200454
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-200454-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-200454 status --alsologtostderr: exit status 7 (94.841585ms)

                                                
                                                
-- stdout --
	multinode-200454
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-200454-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0830 23:18:45.649206 1623122 out.go:296] Setting OutFile to fd 1 ...
	I0830 23:18:45.649321 1623122 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 23:18:45.649331 1623122 out.go:309] Setting ErrFile to fd 2...
	I0830 23:18:45.649336 1623122 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0830 23:18:45.649596 1623122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17114-1496922/.minikube/bin
	I0830 23:18:45.649765 1623122 out.go:303] Setting JSON to false
	I0830 23:18:45.649829 1623122 mustload.go:65] Loading cluster: multinode-200454
	I0830 23:18:45.649926 1623122 notify.go:220] Checking for updates...
	I0830 23:18:45.650200 1623122 config.go:182] Loaded profile config "multinode-200454": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0830 23:18:45.650215 1623122 status.go:255] checking status of multinode-200454 ...
	I0830 23:18:45.650662 1623122 cli_runner.go:164] Run: docker container inspect multinode-200454 --format={{.State.Status}}
	I0830 23:18:45.670398 1623122 status.go:330] multinode-200454 host status = "Stopped" (err=<nil>)
	I0830 23:18:45.670424 1623122 status.go:343] host is not running, skipping remaining checks
	I0830 23:18:45.670430 1623122 status.go:257] multinode-200454 status: &{Name:multinode-200454 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0830 23:18:45.670467 1623122 status.go:255] checking status of multinode-200454-m02 ...
	I0830 23:18:45.670755 1623122 cli_runner.go:164] Run: docker container inspect multinode-200454-m02 --format={{.State.Status}}
	I0830 23:18:45.687493 1623122 status.go:330] multinode-200454-m02 host status = "Stopped" (err=<nil>)
	I0830 23:18:45.687513 1623122 status.go:343] host is not running, skipping remaining checks
	I0830 23:18:45.687519 1623122 status.go:257] multinode-200454-m02 status: &{Name:multinode-200454-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.82s)
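As the two Non-zero exits above show, `minikube status` signals a stopped host through its exit code (7 here) rather than through stderr, which makes it scriptable. A minimal sketch against the same profile:

    minikube -p multinode-200454 status
    echo "exit code: $?"   # 0 = everything running; 7 in the run above, all hosts stopped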

                                                
                                    
TestMultiNode/serial/RestartMultiNode (83.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-200454 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-200454 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m22.511419032s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-200454 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (83.25s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (36.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-200454
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-200454-m02 --driver=docker  --container-runtime=docker
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-200454-m02 --driver=docker  --container-runtime=docker: exit status 14 (95.520946ms)

                                                
                                                
-- stdout --
	* [multinode-200454-m02] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17114
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17114-1496922/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-1496922/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-200454-m02' is duplicated with machine name 'multinode-200454-m02' in profile 'multinode-200454'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-200454-m03 --driver=docker  --container-runtime=docker
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-200454-m03 --driver=docker  --container-runtime=docker: (33.630348931s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-200454
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-200454: exit status 80 (352.331765ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-200454
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-200454-m03 already exists in multinode-200454-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-200454-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-200454-m03: (2.098561603s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.24s)
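Both failures above come from name collisions: a new profile may not reuse a machine name belonging to an existing profile, and `node add` refuses a node name that already exists as a standalone profile. A minimal sketch for avoiding both (the profile name below is a placeholder):

    minikube profile list        # shows existing profiles and their machine names
    minikube start -p my-unique-profile --driver=docker --container-runtime=docker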

                                                
                                    
TestPreload (164.83s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-564218 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0830 23:21:02.226324 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt: no such file or directory
E0830 23:21:39.103367 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
E0830 23:22:25.342450 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-564218 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m40.948079173s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-564218 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-564218 image pull gcr.io/k8s-minikube/busybox: (1.564287463s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-564218
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-564218: (10.86963752s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-564218 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-564218 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (49.034795207s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-564218 image list
helpers_test.go:175: Cleaning up "test-preload-564218" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-564218
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-564218: (2.162138452s)
--- PASS: TestPreload (164.83s)
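The sequence above exercises image persistence without preloaded tarballs. A condensed sketch of the same flow, with a placeholder profile name:

    minikube start -p preload-demo --preload=false --driver=docker --container-runtime=docker
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo
    minikube -p preload-demo image list   # the pulled busybox image should still be listed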

                                                
                                    
TestScheduledStopUnix (108.87s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-352286 --memory=2048 --driver=docker  --container-runtime=docker
E0830 23:23:48.385265 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-352286 --memory=2048 --driver=docker  --container-runtime=docker: (35.499507822s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-352286 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-352286 -n scheduled-stop-352286
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-352286 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-352286 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-352286 -n scheduled-stop-352286
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-352286
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-352286 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-352286
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-352286: exit status 7 (79.017691ms)

                                                
                                                
-- stdout --
	scheduled-stop-352286
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-352286 -n scheduled-stop-352286
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-352286 -n scheduled-stop-352286: exit status 7 (79.194429ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-352286" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-352286
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-352286: (1.629474482s)
--- PASS: TestScheduledStopUnix (108.87s)
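The scheduled-stop commands used above, condensed into a runnable sketch (placeholder profile name):

    minikube stop -p sched-demo --schedule 5m                   # stop five minutes from now
    minikube status -p sched-demo --format={{.TimeToStop}}      # remaining time, if scheduled
    minikube stop -p sched-demo --cancel-scheduled              # abandon the scheduled stop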

                                                
                                    
TestSkaffold (104.06s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3925125495 version
skaffold_test.go:63: skaffold version: v2.6.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-345815 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-345815 --memory=2600 --driver=docker  --container-runtime=docker: (31.526330159s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3925125495 run --minikube-profile skaffold-345815 --kube-context skaffold-345815 --status-check=true --port-forward=false --interactive=false
E0830 23:26:02.225864 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt: no such file or directory
E0830 23:26:39.103530 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3925125495 run --minikube-profile skaffold-345815 --kube-context skaffold-345815 --status-check=true --port-forward=false --interactive=false: (57.69919989s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-6fb9bdbfb4-42kfc" [17d39d79-338c-45ee-b064-7b4bee0ac9e3] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.025053009s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-5f865649fb-lrmbd" [d8211149-97a0-4d0f-9640-c9d8a598586c] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.009465215s
helpers_test.go:175: Cleaning up "skaffold-345815" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-345815
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-345815: (2.833511525s)
--- PASS: TestSkaffold (104.06s)
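The skaffold invocation above, minus the harness's temp-file binary name, reduces to the following sketch (the profile name is a placeholder, and it assumes a skaffold project in the current directory):

    minikube start -p skaffold-demo --memory=2600 --driver=docker --container-runtime=docker
    skaffold run --minikube-profile skaffold-demo --kube-context skaffold-demo \
      --status-check=true --port-forward=false --interactive=false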

                                                
                                    
TestInsufficientStorage (11.05s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-802571 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-802571 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (8.712906143s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d050035e-618a-452e-8e0c-c1ee41bf8e6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-802571] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ea70d1e5-51f1-49fa-9bdd-545ef5a5423c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17114"}}
	{"specversion":"1.0","id":"0d93f44c-e16e-4ba6-8962-96c2f63f15b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3f4bceed-9885-4d5e-8781-3b1e0c99fc23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17114-1496922/kubeconfig"}}
	{"specversion":"1.0","id":"3b1d4014-fe0b-478c-a779-eecc0aaaea52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-1496922/.minikube"}}
	{"specversion":"1.0","id":"65cd327b-bd82-48de-a6e8-81e2fd59d2c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e62547f5-3f4f-4f11-ba99-c27bae55ffd3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d1929c55-373e-4e49-b58d-a96afb1ceeac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b0b45cfb-7bb6-491d-bc98-c857b9a4a1a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"56ef7f07-9c93-4ed3-8f93-096232124c55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b482f19e-f803-4596-98ad-5c3e9f663446","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"a94ea007-72eb-46c5-9416-5dd5817dd3da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-802571 in cluster insufficient-storage-802571","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4de422b4-5bf3-47ae-9e65-6cd3b750e2f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7688c934-ff0a-40ee-828a-49cdf06ed92e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"9bb3f00b-97bd-4662-8562-23ddee57584b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-802571 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-802571 --output=json --layout=cluster: exit status 7 (332.920083ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-802571","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-802571","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0830 23:27:15.949883 1658826 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-802571" does not appear in /home/jenkins/minikube-integration/17114-1496922/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-802571 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-802571 --output=json --layout=cluster: exit status 7 (316.580028ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-802571","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-802571","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0830 23:27:16.268477 1658879 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-802571" does not appear in /home/jenkins/minikube-integration/17114-1496922/kubeconfig
	E0830 23:27:16.280351 1658879 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/insufficient-storage-802571/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-802571" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-802571
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-802571: (1.690968912s)
--- PASS: TestInsufficientStorage (11.05s)
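With --output=json, each line of minikube output is a CloudEvent, so the storage error above can be picked out mechanically. A minimal sketch, assuming jq is installed (placeholder profile name):

    minikube start -p storage-demo --output=json --driver=docker \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'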

                                                
                                    
TestKubernetesUpgrade (389.57s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-068991 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0830 23:36:02.226323 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt: no such file or directory
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-068991 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (55.13258013s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-068991
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-068991: (1.300264612s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-068991 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-068991 status --format={{.Host}}: exit status 7 (79.36436ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-068991 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0830 23:36:39.102732 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
E0830 23:36:54.038785 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/skaffold-345815/client.crt: no such file or directory
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-068991 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m51.046930406s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-068991 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-068991 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-068991 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker: exit status 106 (127.761399ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-068991] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17114
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17114-1496922/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-1496922/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-068991
	    minikube start -p kubernetes-upgrade-068991 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0689912 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.1, by running:
	    
	    minikube start -p kubernetes-upgrade-068991 --kubernetes-version=v1.28.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-068991 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-068991 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (38.657479693s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-068991" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-068991
E0830 23:41:54.038153 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/skaffold-345815/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-068991: (3.069512072s)
--- PASS: TestKubernetesUpgrade (389.57s)
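The upgrade path exercised above, as a condensed sketch (placeholder profile name): in-place upgrades are supported, while downgrading an existing cluster is refused with exit status 106.

    minikube start -p upgrade-demo --kubernetes-version=v1.16.0 --driver=docker
    minikube stop -p upgrade-demo
    minikube start -p upgrade-demo --kubernetes-version=v1.28.1   # in-place upgrade
    minikube start -p upgrade-demo --kubernetes-version=v1.16.0   # refused: K8S_DOWNGRADE_UNSUPPORTED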

                                                
                                    
TestMissingContainerUpgrade (140.41s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.17.0.648765173.exe start -p missing-upgrade-795061 --memory=2200 --driver=docker  --container-runtime=docker
E0830 23:33:15.961284 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/skaffold-345815/client.crt: no such file or directory
version_upgrade_test.go:321: (dbg) Done: /tmp/minikube-v1.17.0.648765173.exe start -p missing-upgrade-795061 --memory=2200 --driver=docker  --container-runtime=docker: (1m2.114023485s)
version_upgrade_test.go:330: (dbg) Run:  docker stop missing-upgrade-795061
version_upgrade_test.go:330: (dbg) Done: docker stop missing-upgrade-795061: (10.403015863s)
version_upgrade_test.go:335: (dbg) Run:  docker rm missing-upgrade-795061
version_upgrade_test.go:341: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-795061 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0830 23:34:37.881539 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/skaffold-345815/client.crt: no such file or directory
E0830 23:34:42.147463 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
version_upgrade_test.go:341: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-795061 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m4.547636719s)
helpers_test.go:175: Cleaning up "missing-upgrade-795061" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-795061
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-795061: (2.25297085s)
--- PASS: TestMissingContainerUpgrade (140.41s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.96s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.96s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (100.51s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.17.0.3915445139.exe start -p stopped-upgrade-819247 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0830 23:37:21.722195 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/skaffold-345815/client.crt: no such file or directory
E0830 23:37:25.342749 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.17.0.3915445139.exe start -p stopped-upgrade-819247 --memory=2200 --vm-driver=docker  --container-runtime=docker: (52.557272175s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.17.0.3915445139.exe -p stopped-upgrade-819247 stop
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.17.0.3915445139.exe -p stopped-upgrade-819247 stop: (10.85219045s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-819247 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:210: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-819247 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (37.100011485s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (100.51s)
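The same flow outside the harness: create and stop a cluster with an older release binary, then start it with the current one. The old-binary path below is a placeholder for a downloaded v1.17.0 release, not the temp file the test generates:

    /path/to/minikube-v1.17.0 start -p stopped-demo --memory=2200 --vm-driver=docker --container-runtime=docker
    /path/to/minikube-v1.17.0 -p stopped-demo stop
    out/minikube-linux-arm64 start -p stopped-demo --memory=2200 --driver=docker --container-runtime=docker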

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.67s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-819247
version_upgrade_test.go:218: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-819247: (1.668180137s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.67s)

                                                
                                    
TestPause/serial/Start (49.69s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-546733 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-546733 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (49.694523898s)
--- PASS: TestPause/serial/Start (49.69s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (37.64s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-546733 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0830 23:40:28.386208 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-546733 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (37.6170086s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (37.64s)

                                                
                                    
TestPause/serial/Pause (0.63s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-546733 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.63s)

                                                
                                    
TestPause/serial/VerifyStatus (0.36s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-546733 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-546733 --output=json --layout=cluster: exit status 2 (357.380349ms)

                                                
                                                
-- stdout --
	{"Name":"pause-546733","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-546733","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.36s)
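A paused cluster reports HTTP-style status 418 ("Paused") and a non-zero exit, but still prints the JSON document to stdout, so it remains parseable. A minimal sketch, assuming jq is installed (placeholder profile name):

    minikube status -p pause-demo --output=json --layout=cluster | jq -r '.StatusName'   # "Paused"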

                                                
                                    
TestPause/serial/Unpause (0.57s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-546733 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.57s)

                                                
                                    
TestPause/serial/PauseAgain (1.15s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-546733 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-546733 --alsologtostderr -v=5: (1.15413296s)
--- PASS: TestPause/serial/PauseAgain (1.15s)

                                                
                                    
TestPause/serial/DeletePaused (2.12s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-546733 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-546733 --alsologtostderr -v=5: (2.121641466s)
--- PASS: TestPause/serial/DeletePaused (2.12s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.36s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-546733
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-546733: exit status 1 (17.106152ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-546733: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.36s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-252241 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-252241 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (89.618652ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-252241] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17114
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17114-1496922/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17114-1496922/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
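As the MK_USAGE error above states, --no-kubernetes and --kubernetes-version are mutually exclusive. A minimal sketch of the valid forms (placeholder profile name):

    minikube start -p nok8s-demo --no-kubernetes --driver=docker   # container only, no cluster
    minikube config unset kubernetes-version                       # clear a global default, if one is set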

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (39.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-252241 --driver=docker  --container-runtime=docker
E0830 23:41:02.225768 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-252241 --driver=docker  --container-runtime=docker: (39.058131623s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-252241 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.54s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.90s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-252241 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-252241 --no-kubernetes --driver=docker  --container-runtime=docker: (5.796112298s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-252241 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-252241 status -o json: exit status 2 (344.686326ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-252241","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-252241
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-252241: (1.760926067s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.90s)

                                                
                                    
TestNoKubernetes/serial/Start (11.20s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-252241 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-252241 --no-kubernetes --driver=docker  --container-runtime=docker: (11.202635415s)
--- PASS: TestNoKubernetes/serial/Start (11.20s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-252241 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-252241 "sudo systemctl is-active --quiet service kubelet": exit status 1 (450.347506ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.45s)
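The check above relies on systemctl's exit status rather than its output: `is-active --quiet` exits 0 only when the unit is active. Reproduced by hand against the same profile:

    minikube ssh -p NoKubernetes-252241 "sudo systemctl is-active --quiet kubelet"
    echo $?   # non-zero when kubelet is not running (the ssh wrapper reports status 3 above)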

                                                
                                    
TestNoKubernetes/serial/ProfileList (3.80s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
E0830 23:41:39.102624 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-arm64 profile list: (3.144541144s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (3.80s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-252241
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-252241: (1.244660301s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-252241 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-252241 --driver=docker  --container-runtime=docker: (8.950235084s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.95s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-252241 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-252241 "sudo systemctl is-active --quiet service kubelet": exit status 1 (362.324841ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (94.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-504558 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-504558 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m34.514966207s)
--- PASS: TestNetworkPlugins/group/auto/Start (94.52s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (69.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-504558 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0830 23:42:25.342451 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-504558 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m9.974294381s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (69.97s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-fbbfj" [d9ace91b-28c7-4d5c-89d7-43dd646b27cb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.036665043s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-504558 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-504558 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-47hwt" [02d12c26-5c91-46c7-9e86-f1634c0782b1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-47hwt" [02d12c26-5c91-46c7-9e86-f1634c0782b1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.01091121s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.33s)
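
Note: NetCatPod applies a small Deployment from testdata and then polls until the matching pod is Running. A minimal polling loop over `kubectl get` in the same spirit (the jsonpath expression and poll interval are assumptions):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        deadline := time.Now().Add(15 * time.Minute)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", "kindnet-504558",
                "get", "pods", "-l", "app=netcat",
                "-o", "jsonpath={.items[*].status.phase}").Output()
            if err == nil && strings.Contains(string(out), "Running") {
                fmt.Println("netcat pod is running")
                return
            }
            time.Sleep(5 * time.Second)
        }
        fmt.Println("timed out waiting for app=netcat")
    }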

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-504558 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)
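
Note: the DNS step execs `nslookup kubernetes.default` inside the netcat pod; a successful lookup shows the pod's resolver reaches the cluster DNS service. The same check driven from Go:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("kubectl", "--context", "kindnet-504558",
            "exec", "deployment/netcat", "--",
            "nslookup", "kubernetes.default").CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("cluster DNS lookup failed:", err)
        }
    }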

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-504558 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)
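
Note: in the Localhost probe, `nc -w 5 -i 5 -z localhost 8080` opens a TCP connection with a 5-second timeout (-w), a 5-second interval between probes (-i), and no payload (-z); success means a process in the pod can reach its own localhost:8080. The equivalent probe written directly in Go:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Equivalent of `nc -w 5 -z localhost 8080`: just try to open
        // the TCP connection and close it again.
        conn, err := net.DialTimeout("tcp", "localhost:8080", 5*time.Second)
        if err != nil {
            fmt.Println("localhost:8080 not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("localhost:8080 reachable")
    }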

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-504558 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)
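
Note: HairPin differs from Localhost only in the target. The pod dials its own Service name (`netcat`), so traffic leaves the pod, hits the service address, and must be NAT-ed back to the same pod; a network plugin without hairpin support fails this check. A sketch of the probe as it would run inside the pod (the service name is only resolvable in-cluster):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the pod's own service; the connection loops back to the pod.
        conn, err := net.DialTimeout("tcp", "netcat:8080", 5*time.Second)
        if err != nil {
            fmt.Println("hairpin connection failed:", err)
            return
        }
        conn.Close()
        fmt.Println("hairpin connection succeeded")
    }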

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-504558 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.45s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-504558 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dqgkl" [772cb605-4bd6-409c-8691-4d13342a6cc2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dqgkl" [772cb605-4bd6-409c-8691-4d13342a6cc2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.033849966s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.52s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-504558 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-504558 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.26s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-504558 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.27s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (83.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-504558 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-504558 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m23.379702234s)
--- PASS: TestNetworkPlugins/group/calico/Start (83.38s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (70.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-504558 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-504558 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m10.483959052s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.48s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-gwcrj" [6c6c685e-ec8f-4465-b2a6-e9566bd1f4ea] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.044778223s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.05s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-504558 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.45s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-504558 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-g9mtr" [b54dd0a7-ff97-4aca-be59-61a4a4018a5b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-g9mtr" [b54dd0a7-ff97-4aca-be59-61a4a4018a5b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.010406311s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.69s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-504558 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-504558 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5f52q" [fd251adf-3333-4ad7-a4ec-bb172a57e297] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5f52q" [fd251adf-3333-4ad7-a4ec-bb172a57e297] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.014271675s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.45s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-504558 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-504558 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-504558 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-504558 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-504558 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-504558 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/false/Start (93.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-504558 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-504558 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m33.281426274s)
--- PASS: TestNetworkPlugins/group/false/Start (93.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (54.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-504558 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0830 23:46:02.225863 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt: no such file or directory
E0830 23:46:39.103487 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-504558 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (54.573684015s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (54.57s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-504558 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-504558 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2fq4s" [6fd8ad80-03ed-48bb-bdb3-d9a72fa10efd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0830 23:46:54.038098 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/skaffold-345815/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-2fq4s" [6fd8ad80-03ed-48bb-bdb3-d9a72fa10efd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.010376156s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.33s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (34.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-504558 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-504558 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.224031463s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-504558 exec deployment/netcat -- nslookup kubernetes.default
E0830 23:47:25.342404 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-504558 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.20973224s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-504558 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (34.33s)
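
Note: this DNS step needed three attempts; the harness tolerates transient `;; connection timed out` results by re-running the lookup until its overall budget is spent. A generic retry loop in that spirit (the attempt count and delay are assumptions, not the harness's actual backoff schedule):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        var lastErr error
        for attempt := 1; attempt <= 3; attempt++ {
            out, err := exec.Command("kubectl", "--context", "enable-default-cni-504558",
                "exec", "deployment/netcat", "--",
                "nslookup", "kubernetes.default").CombinedOutput()
            if err == nil {
                fmt.Printf("lookup succeeded on attempt %d:\n%s", attempt, out)
                return
            }
            lastErr = err
            time.Sleep(10 * time.Second)
        }
        fmt.Println("lookup never succeeded:", lastErr)
    }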

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-504558 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (9.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-504558 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lp4dv" [1a174f6b-ece9-4c41-9c80-30147cc0df89] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lp4dv" [1a174f6b-ece9-4c41-9c80-30147cc0df89] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 9.010844832s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (9.38s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-504558 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-504558 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-504558 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-504558 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-504558 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (68.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-504558 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-504558 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m8.231564402s)
--- PASS: TestNetworkPlugins/group/flannel/Start (68.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (58.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-504558 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0830 23:48:06.303474 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kindnet-504558/client.crt: no such file or directory
E0830 23:48:06.308916 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kindnet-504558/client.crt: no such file or directory
E0830 23:48:06.319162 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kindnet-504558/client.crt: no such file or directory
E0830 23:48:06.339412 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kindnet-504558/client.crt: no such file or directory
E0830 23:48:06.379644 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kindnet-504558/client.crt: no such file or directory
E0830 23:48:06.459893 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kindnet-504558/client.crt: no such file or directory
E0830 23:48:06.620219 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kindnet-504558/client.crt: no such file or directory
E0830 23:48:06.940699 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kindnet-504558/client.crt: no such file or directory
E0830 23:48:07.580900 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kindnet-504558/client.crt: no such file or directory
E0830 23:48:08.861476 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kindnet-504558/client.crt: no such file or directory
E0830 23:48:11.421666 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kindnet-504558/client.crt: no such file or directory
E0830 23:48:16.542171 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kindnet-504558/client.crt: no such file or directory
E0830 23:48:17.082365 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/skaffold-345815/client.crt: no such file or directory
E0830 23:48:26.782507 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kindnet-504558/client.crt: no such file or directory
E0830 23:48:28.305048 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/auto-504558/client.crt: no such file or directory
E0830 23:48:28.310252 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/auto-504558/client.crt: no such file or directory
E0830 23:48:28.320465 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/auto-504558/client.crt: no such file or directory
E0830 23:48:28.340686 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/auto-504558/client.crt: no such file or directory
E0830 23:48:28.380903 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/auto-504558/client.crt: no such file or directory
E0830 23:48:28.461134 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/auto-504558/client.crt: no such file or directory
E0830 23:48:28.621523 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/auto-504558/client.crt: no such file or directory
E0830 23:48:28.942271 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/auto-504558/client.crt: no such file or directory
E0830 23:48:29.583279 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/auto-504558/client.crt: no such file or directory
E0830 23:48:30.863473 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/auto-504558/client.crt: no such file or directory
E0830 23:48:33.423688 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/auto-504558/client.crt: no such file or directory
E0830 23:48:38.545207 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/auto-504558/client.crt: no such file or directory
E0830 23:48:47.262844 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kindnet-504558/client.crt: no such file or directory
E0830 23:48:48.785977 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/auto-504558/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-504558 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (58.868746075s)
--- PASS: TestNetworkPlugins/group/bridge/Start (58.87s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-504558 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.48s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.50s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-504558 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-25lcc" [26b5363e-c77b-4ae3-a9fd-60f88a37d50b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0830 23:49:09.266956 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/auto-504558/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-25lcc" [26b5363e-c77b-4ae3-a9fd-60f88a37d50b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.020908574s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.50s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-9whl7" [c7d69b78-9ccf-403b-8b9c-f17c192988ea] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.029379675s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-504558 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-504558 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-504558 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.46s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-504558 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-504558 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4jrc5" [e7e5bfa4-08c5-4f07-8953-858179419f74] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4jrc5" [e7e5bfa4-08c5-4f07-8953-858179419f74] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.021142287s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.42s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-504558 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-504558 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-504558 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.31s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (95.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-504558 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0830 23:49:50.227848 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/auto-504558/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-504558 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m35.387580327s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (95.39s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (136.95s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-184466 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E0830 23:50:08.764887 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/calico-504558/client.crt: no such file or directory
E0830 23:50:08.770134 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/calico-504558/client.crt: no such file or directory
E0830 23:50:08.780356 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/calico-504558/client.crt: no such file or directory
E0830 23:50:08.800707 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/calico-504558/client.crt: no such file or directory
E0830 23:50:08.841462 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/calico-504558/client.crt: no such file or directory
E0830 23:50:08.921726 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/calico-504558/client.crt: no such file or directory
E0830 23:50:09.082425 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/calico-504558/client.crt: no such file or directory
E0830 23:50:09.402992 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/calico-504558/client.crt: no such file or directory
E0830 23:50:10.043339 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/calico-504558/client.crt: no such file or directory
E0830 23:50:11.324040 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/calico-504558/client.crt: no such file or directory
E0830 23:50:13.884219 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/calico-504558/client.crt: no such file or directory
E0830 23:50:18.229834 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/custom-flannel-504558/client.crt: no such file or directory
E0830 23:50:18.234977 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/custom-flannel-504558/client.crt: no such file or directory
E0830 23:50:18.245226 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/custom-flannel-504558/client.crt: no such file or directory
E0830 23:50:18.265466 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/custom-flannel-504558/client.crt: no such file or directory
E0830 23:50:18.305811 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/custom-flannel-504558/client.crt: no such file or directory
E0830 23:50:18.386166 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/custom-flannel-504558/client.crt: no such file or directory
E0830 23:50:18.547039 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/custom-flannel-504558/client.crt: no such file or directory
E0830 23:50:18.867969 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/custom-flannel-504558/client.crt: no such file or directory
E0830 23:50:19.004375 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/calico-504558/client.crt: no such file or directory
E0830 23:50:19.508811 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/custom-flannel-504558/client.crt: no such file or directory
E0830 23:50:20.789635 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/custom-flannel-504558/client.crt: no such file or directory
E0830 23:50:23.350135 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/custom-flannel-504558/client.crt: no such file or directory
E0830 23:50:28.470910 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/custom-flannel-504558/client.crt: no such file or directory
E0830 23:50:29.244565 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/calico-504558/client.crt: no such file or directory
E0830 23:50:38.711118 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/custom-flannel-504558/client.crt: no such file or directory
E0830 23:50:49.725656 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/calico-504558/client.crt: no such file or directory
E0830 23:50:50.144114 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kindnet-504558/client.crt: no such file or directory
E0830 23:50:59.191440 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/custom-flannel-504558/client.crt: no such file or directory
E0830 23:51:02.225721 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt: no such file or directory
E0830 23:51:12.148515 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/auto-504558/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-184466 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (2m16.954005826s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (136.95s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-504558 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-504558 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8d5kk" [ab0222b7-c44c-423e-96f0-e1d93d2f13e3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0830 23:51:22.148395 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-8d5kk" [ab0222b7-c44c-423e-96f0-e1d93d2f13e3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.013648412s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.31s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-504558 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-504558 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-504558 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.18s)
E0831 00:08:20.981767 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/no-preload-850853/client.crt: no such file or directory

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (91.55s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-850853 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.1
E0830 23:51:53.363315 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/enable-default-cni-504558/client.crt: no such file or directory
E0830 23:51:53.368586 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/enable-default-cni-504558/client.crt: no such file or directory
E0830 23:51:53.378814 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/enable-default-cni-504558/client.crt: no such file or directory
E0830 23:51:53.399052 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/enable-default-cni-504558/client.crt: no such file or directory
E0830 23:51:53.439293 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/enable-default-cni-504558/client.crt: no such file or directory
E0830 23:51:53.519538 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/enable-default-cni-504558/client.crt: no such file or directory
E0830 23:51:53.680680 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/enable-default-cni-504558/client.crt: no such file or directory
E0830 23:51:54.001169 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/enable-default-cni-504558/client.crt: no such file or directory
E0830 23:51:54.038109 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/skaffold-345815/client.crt: no such file or directory
E0830 23:51:54.641372 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/enable-default-cni-504558/client.crt: no such file or directory
E0830 23:51:55.921702 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/enable-default-cni-504558/client.crt: no such file or directory
E0830 23:51:58.481987 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/enable-default-cni-504558/client.crt: no such file or directory
E0830 23:52:03.603129 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/enable-default-cni-504558/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-850853 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.1: (1m31.547059522s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (91.55s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-184466 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1d92219e-df0b-483f-b843-c1816f1f1bff] Pending
helpers_test.go:344: "busybox" [1d92219e-df0b-483f-b843-c1816f1f1bff] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0830 23:52:13.843299 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/enable-default-cni-504558/client.crt: no such file or directory
helpers_test.go:344: "busybox" [1d92219e-df0b-483f-b843-c1816f1f1bff] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.042490375s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-184466 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.52s)
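
Note: once the busybox pod is healthy, the step execs `ulimit -n` in it, which both verifies exec against the freshly started v1.16.0 cluster and reports the container's open-file limit. The same exec from Go:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("kubectl", "--context", "old-k8s-version-184466",
            "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
        if err != nil {
            fmt.Println("exec failed:", err)
            return
        }
        fmt.Printf("open-file limit inside busybox: %s", out)
    }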

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-184466 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-184466 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.039344346s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-184466 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.18s)
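
Note: the addon is enabled with --images/--registries overrides so metrics-server is pointed at a stand-in image and registry, and the follow-up `kubectl describe` confirms the Deployment picked up the substitution. A sketch of the same invocation:

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // Enable the addon with the same image/registry overrides the
        // test uses; kubectl can then confirm the substituted image.
        cmd := exec.Command("minikube", "-p", "old-k8s-version-184466",
            "addons", "enable", "metrics-server",
            "--images=MetricsServer=registry.k8s.io/echoserver:1.4",
            "--registries=MetricsServer=fake.domain")
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            os.Exit(1)
        }
    }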

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-184466 --alsologtostderr -v=3
E0830 23:52:25.341926 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
E0830 23:52:29.271407 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/false-504558/client.crt: no such file or directory
E0830 23:52:29.276718 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/false-504558/client.crt: no such file or directory
E0830 23:52:29.286950 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/false-504558/client.crt: no such file or directory
E0830 23:52:29.307209 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/false-504558/client.crt: no such file or directory
E0830 23:52:29.347772 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/false-504558/client.crt: no such file or directory
E0830 23:52:29.428036 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/false-504558/client.crt: no such file or directory
E0830 23:52:29.588374 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/false-504558/client.crt: no such file or directory
E0830 23:52:29.908536 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/false-504558/client.crt: no such file or directory
E0830 23:52:30.549204 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/false-504558/client.crt: no such file or directory
E0830 23:52:31.829508 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/false-504558/client.crt: no such file or directory
E0830 23:52:34.323966 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/enable-default-cni-504558/client.crt: no such file or directory
E0830 23:52:34.390230 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/false-504558/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-184466 --alsologtostderr -v=3: (11.256954856s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.26s)
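
Note: the E-lines interleaved above are not failures of this test. They are emitted by what appears to be client-go's certificate-reload watcher (the cert_rotation.go:168 in the log), which still holds references to kubeconfig client certificates of profiles (functional-489151, false-504558, ...) that earlier tests already deleted; the underlying error is a plain missing-file error from os.Open. A minimal stand-alone Go sketch of that failure mode, with a hypothetical path standing in for a deleted profile's cert:

	package main

	import (
		"errors"
		"fmt"
		"io/fs"
		"os"
	)

	func main() {
		// Hypothetical path standing in for a deleted profile's client cert.
		_, err := os.Open("/home/jenkins/.minikube/profiles/deleted/client.crt")
		if errors.Is(err, fs.ErrNotExist) {
			// Prints the same suffix seen in the log lines:
			//   open <path>: no such file or directory
			fmt.Println("key failed with :", err)
		}
	}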

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-184466 -n old-k8s-version-184466
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-184466 -n old-k8s-version-184466: exit status 7 (87.609899ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-184466 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)
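
Note: the Non-zero exit here is expected. minikube status encodes component state in its exit code, and the harness explicitly tolerates it ("status error: exit status 7 (may be ok)") when the host reports Stopped. A hedged Go sketch of reading that exit code via os/exec (binary and profile names taken from this log):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "old-k8s-version-184466")
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Exit status 7 is acceptable while the cluster is stopped.
			fmt.Printf("status %q, exit code %d\n", out, exitErr.ExitCode())
		}
	}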

TestStartStop/group/old-k8s-version/serial/SecondStart (429.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-184466 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E0830 23:52:39.511203 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/false-504558/client.crt: no such file or directory
E0830 23:52:49.752807 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/false-504558/client.crt: no such file or directory
E0830 23:52:52.607105 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/calico-504558/client.crt: no such file or directory
E0830 23:53:02.072206 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/custom-flannel-504558/client.crt: no such file or directory
E0830 23:53:06.303360 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kindnet-504558/client.crt: no such file or directory
E0830 23:53:10.233018 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/false-504558/client.crt: no such file or directory
E0830 23:53:15.284739 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/enable-default-cni-504558/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-184466 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (7m9.02709082s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-184466 -n old-k8s-version-184466
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (429.70s)

TestStartStop/group/no-preload/serial/DeployApp (9.5s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-850853 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6c991dc1-7475-424e-9dda-dcb5f2d00bd8] Pending
helpers_test.go:344: "busybox" [6c991dc1-7475-424e-9dda-dcb5f2d00bd8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6c991dc1-7475-424e-9dda-dcb5f2d00bd8] Running
E0830 23:53:28.304637 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/auto-504558/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.034101102s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-850853 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.50s)
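
Note: DeployApp finishes by exec'ing "ulimit -n" inside the busybox pod to confirm the container sees a usable open-file limit. A minimal sketch of the same probe driven from Go with os/exec (context and pod names from the log; the harness's assertion on the value is omitted):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "no-preload-850853",
			"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
		if err != nil {
			fmt.Println("exec failed:", err)
			return
		}
		fmt.Printf("open-file limit in pod: %s", out)
	}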

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.33s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-850853 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-850853 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.215113699s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-850853 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.33s)

TestStartStop/group/no-preload/serial/Stop (10.99s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-850853 --alsologtostderr -v=3
E0830 23:53:33.984557 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kindnet-504558/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-850853 --alsologtostderr -v=3: (10.986265155s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.99s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-850853 -n no-preload-850853
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-850853 -n no-preload-850853: exit status 7 (80.069576ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-850853 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (320.67s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-850853 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.1
E0830 23:53:51.193777 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/false-504558/client.crt: no such file or directory
E0830 23:53:55.988660 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/auto-504558/client.crt: no such file or directory
E0830 23:54:05.063782 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/bridge-504558/client.crt: no such file or directory
E0830 23:54:05.069004 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/bridge-504558/client.crt: no such file or directory
E0830 23:54:05.079259 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/bridge-504558/client.crt: no such file or directory
E0830 23:54:05.099524 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/bridge-504558/client.crt: no such file or directory
E0830 23:54:05.139764 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/bridge-504558/client.crt: no such file or directory
E0830 23:54:05.220030 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/bridge-504558/client.crt: no such file or directory
E0830 23:54:05.380472 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/bridge-504558/client.crt: no such file or directory
E0830 23:54:05.700761 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/bridge-504558/client.crt: no such file or directory
E0830 23:54:06.341787 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/bridge-504558/client.crt: no such file or directory
E0830 23:54:07.622379 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/bridge-504558/client.crt: no such file or directory
E0830 23:54:10.183177 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/bridge-504558/client.crt: no such file or directory
E0830 23:54:12.437727 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/flannel-504558/client.crt: no such file or directory
E0830 23:54:12.443023 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/flannel-504558/client.crt: no such file or directory
E0830 23:54:12.453328 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/flannel-504558/client.crt: no such file or directory
E0830 23:54:12.473584 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/flannel-504558/client.crt: no such file or directory
E0830 23:54:12.513795 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/flannel-504558/client.crt: no such file or directory
E0830 23:54:12.594087 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/flannel-504558/client.crt: no such file or directory
E0830 23:54:12.754923 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/flannel-504558/client.crt: no such file or directory
E0830 23:54:13.075445 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/flannel-504558/client.crt: no such file or directory
E0830 23:54:13.716622 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/flannel-504558/client.crt: no such file or directory
E0830 23:54:14.996829 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/flannel-504558/client.crt: no such file or directory
E0830 23:54:15.304270 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/bridge-504558/client.crt: no such file or directory
E0830 23:54:17.557899 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/flannel-504558/client.crt: no such file or directory
E0830 23:54:22.678902 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/flannel-504558/client.crt: no such file or directory
E0830 23:54:25.544728 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/bridge-504558/client.crt: no such file or directory
E0830 23:54:32.919621 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/flannel-504558/client.crt: no such file or directory
E0830 23:54:37.205247 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/enable-default-cni-504558/client.crt: no such file or directory
E0830 23:54:46.024951 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/bridge-504558/client.crt: no such file or directory
E0830 23:54:53.399788 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/flannel-504558/client.crt: no such file or directory
E0830 23:55:08.765022 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/calico-504558/client.crt: no such file or directory
E0830 23:55:13.113979 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/false-504558/client.crt: no such file or directory
E0830 23:55:18.230428 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/custom-flannel-504558/client.crt: no such file or directory
E0830 23:55:26.985656 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/bridge-504558/client.crt: no such file or directory
E0830 23:55:34.360373 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/flannel-504558/client.crt: no such file or directory
E0830 23:55:36.447797 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/calico-504558/client.crt: no such file or directory
E0830 23:55:45.912872 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/custom-flannel-504558/client.crt: no such file or directory
E0830 23:56:02.225884 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt: no such file or directory
E0830 23:56:17.316681 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kubenet-504558/client.crt: no such file or directory
E0830 23:56:17.321976 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kubenet-504558/client.crt: no such file or directory
E0830 23:56:17.332215 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kubenet-504558/client.crt: no such file or directory
E0830 23:56:17.352447 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kubenet-504558/client.crt: no such file or directory
E0830 23:56:17.392678 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kubenet-504558/client.crt: no such file or directory
E0830 23:56:17.472942 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kubenet-504558/client.crt: no such file or directory
E0830 23:56:17.633698 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kubenet-504558/client.crt: no such file or directory
E0830 23:56:17.954267 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kubenet-504558/client.crt: no such file or directory
E0830 23:56:18.595287 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kubenet-504558/client.crt: no such file or directory
E0830 23:56:19.876257 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kubenet-504558/client.crt: no such file or directory
E0830 23:56:22.437092 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kubenet-504558/client.crt: no such file or directory
E0830 23:56:27.558083 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kubenet-504558/client.crt: no such file or directory
E0830 23:56:37.798783 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kubenet-504558/client.crt: no such file or directory
E0830 23:56:39.103150 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
E0830 23:56:48.906576 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/bridge-504558/client.crt: no such file or directory
E0830 23:56:53.362866 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/enable-default-cni-504558/client.crt: no such file or directory
E0830 23:56:54.038918 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/skaffold-345815/client.crt: no such file or directory
E0830 23:56:56.280541 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/flannel-504558/client.crt: no such file or directory
E0830 23:56:58.279043 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kubenet-504558/client.crt: no such file or directory
E0830 23:57:08.387072 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
E0830 23:57:21.045768 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/enable-default-cni-504558/client.crt: no such file or directory
E0830 23:57:25.341962 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
E0830 23:57:29.271791 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/false-504558/client.crt: no such file or directory
E0830 23:57:39.239489 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kubenet-504558/client.crt: no such file or directory
E0830 23:57:56.954180 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/false-504558/client.crt: no such file or directory
E0830 23:58:06.303373 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kindnet-504558/client.crt: no such file or directory
E0830 23:58:28.304187 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/auto-504558/client.crt: no such file or directory
E0830 23:59:01.160328 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kubenet-504558/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-850853 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.1: (5m20.203194916s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-850853 -n no-preload-850853
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (320.67s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-s6llh" [99178347-07e2-413e-821b-4c0733579726] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0830 23:59:05.064282 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/bridge-504558/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-s6llh" [99178347-07e2-413e-821b-4c0733579726] Running
E0830 23:59:12.437596 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/flannel-504558/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.032901868s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.03s)
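
Note: the wait above is a label-selector poll: the helper watches for pods matching k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace until one reports Running. A hedged client-go sketch of an equivalent loop (this is not the helpers_test.go implementation; a stricter check would also inspect the pod's Ready condition, as the status lines in the log do):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the kubeconfig context the test uses (name from the log).
		cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
			clientcmd.NewDefaultClientConfigLoadingRules(),
			&clientcmd.ConfigOverrides{CurrentContext: "no-preload-850853"},
		).ClientConfig()
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(9 * time.Minute) // same budget as the test
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						fmt.Println("healthy:", p.Name)
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for dashboard pod")
	}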

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-s6llh" [99178347-07e2-413e-821b-4c0733579726] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009967852s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-850853 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-850853 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.38s)
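
Note: VerifyKubernetesImages lists images over the CRI with "sudo crictl images -o json" and flags anything outside minikube's expected set, which is how the busybox test image is spotted. A sketch of parsing that output in Go, assuming crictl's JSON mirrors the CRI ListImagesResponse ({"images":[{"repoTags":[...]}]}); the field names are an assumption, not verified against this crictl version:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Assumed shape of `crictl images -o json`.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "ssh",
			"-p", "no-preload-850853", "sudo crictl images -o json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				fmt.Println(tag) // e.g. gcr.io/k8s-minikube/busybox:1.28.4-glibc
			}
		}
	}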

TestStartStop/group/no-preload/serial/Pause (3.16s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-850853 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-850853 -n no-preload-850853
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-850853 -n no-preload-850853: exit status 2 (373.029322ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-850853 -n no-preload-850853
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-850853 -n no-preload-850853: exit status 2 (370.356798ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-850853 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-850853 -n no-preload-850853
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-850853 -n no-preload-850853
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.16s)
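
Note: pause leaves the API server reporting Paused while the kubelet reports Stopped, and both status probes exit 2 by design. The --format flag is a Go text/template rendered against minikube's status value; a self-contained illustration with a hypothetical Status struct (field names mirror the templates in this log, not necessarily minikube's actual type):

	package main

	import (
		"os"
		"text/template"
	)

	// Hypothetical stand-in for the value minikube renders templates against.
	type Status struct {
		Host, APIServer, Kubelet string
	}

	func main() {
		s := Status{Host: "Running", APIServer: "Paused", Kubelet: "Stopped"}
		// Same template syntax as: status --format={{.APIServer}}
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		if err := tmpl.Execute(os.Stdout, s); err != nil {
			panic(err) // prints "Paused" on success
		}
	}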

TestStartStop/group/embed-certs/serial/FirstStart (84.53s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-356573 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.1
E0830 23:59:32.747659 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/bridge-504558/client.crt: no such file or directory
E0830 23:59:40.120953 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/flannel-504558/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-356573 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.1: (1m24.531524397s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (84.53s)
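
Note: --embed-certs makes minikube write the client certificate and key into the kubeconfig as inline base64 data rather than as file paths, which avoids exactly the kind of dangling client.crt paths the cert_rotation errors elsewhere in this run complain about. A hedged sketch that distinguishes embedded from path-referenced credentials using client-go's clientcmd API:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig (~/.kube/config).
		cfg, err := clientcmd.LoadFromFile(clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		for name, auth := range cfg.AuthInfos {
			switch {
			case len(auth.ClientCertificateData) > 0:
				fmt.Printf("%s: certificate embedded in kubeconfig\n", name)
			case auth.ClientCertificate != "":
				fmt.Printf("%s: certificate at path %s\n", name, auth.ClientCertificate)
			}
		}
	}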

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-v2pld" [1f26aa16-5184-40af-b6e1-75d2e9176c38] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.029660406s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-v2pld" [1f26aa16-5184-40af-b6e1-75d2e9176c38] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010152863s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-184466 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-184466 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/old-k8s-version/serial/Pause (3.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-184466 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-184466 -n old-k8s-version-184466
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-184466 -n old-k8s-version-184466: exit status 2 (366.217834ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-184466 -n old-k8s-version-184466
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-184466 -n old-k8s-version-184466: exit status 2 (380.252048ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-184466 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-184466 -n old-k8s-version-184466
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-184466 -n old-k8s-version-184466
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.18s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (94.57s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-441417 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.1
E0831 00:00:08.765534 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/calico-504558/client.crt: no such file or directory
E0831 00:00:18.230132 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/custom-flannel-504558/client.crt: no such file or directory
E0831 00:00:45.271735 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-441417 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.1: (1m34.573103016s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (94.57s)

TestStartStop/group/embed-certs/serial/DeployApp (9.53s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-356573 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dd8938b1-8d39-43c0-aa3c-df0e1a771e42] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [dd8938b1-8d39-43c0-aa3c-df0e1a771e42] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.035348133s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-356573 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.53s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-356573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-356573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.058940474s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-356573 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/embed-certs/serial/Stop (10.83s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-356573 --alsologtostderr -v=3
E0831 00:01:02.225760 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-356573 --alsologtostderr -v=3: (10.832684861s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.83s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-356573 -n embed-certs-356573
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-356573 -n embed-certs-356573: exit status 7 (84.079349ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-356573 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/embed-certs/serial/SecondStart (343.12s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-356573 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.1
E0831 00:01:17.317074 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kubenet-504558/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-356573 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.1: (5m42.636113528s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-356573 -n embed-certs-356573
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (343.12s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.64s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-441417 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f55be67c-1ec4-4499-acd7-017a42fb3872] Pending
E0831 00:01:39.103110 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
helpers_test.go:344: "busybox" [f55be67c-1ec4-4499-acd7-017a42fb3872] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f55be67c-1ec4-4499-acd7-017a42fb3872] Running
E0831 00:01:45.001399 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kubenet-504558/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.040151435s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-441417 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.64s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-441417 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-441417 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.076928107s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-441417 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-441417 --alsologtostderr -v=3
E0831 00:01:53.362904 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/enable-default-cni-504558/client.crt: no such file or directory
E0831 00:01:54.037977 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/skaffold-345815/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-441417 --alsologtostderr -v=3: (11.09473095s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-441417 -n default-k8s-diff-port-441417
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-441417 -n default-k8s-diff-port-441417: exit status 7 (91.235514ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-441417 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (350.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-441417 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.1
E0831 00:02:13.528318 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/old-k8s-version-184466/client.crt: no such file or directory
E0831 00:02:13.533742 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/old-k8s-version-184466/client.crt: no such file or directory
E0831 00:02:13.544043 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/old-k8s-version-184466/client.crt: no such file or directory
E0831 00:02:13.564312 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/old-k8s-version-184466/client.crt: no such file or directory
E0831 00:02:13.604574 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/old-k8s-version-184466/client.crt: no such file or directory
E0831 00:02:13.684848 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/old-k8s-version-184466/client.crt: no such file or directory
E0831 00:02:13.845204 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/old-k8s-version-184466/client.crt: no such file or directory
E0831 00:02:14.166274 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/old-k8s-version-184466/client.crt: no such file or directory
E0831 00:02:14.806478 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/old-k8s-version-184466/client.crt: no such file or directory
E0831 00:02:16.086790 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/old-k8s-version-184466/client.crt: no such file or directory
E0831 00:02:18.647120 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/old-k8s-version-184466/client.crt: no such file or directory
E0831 00:02:23.768116 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/old-k8s-version-184466/client.crt: no such file or directory
E0831 00:02:25.342277 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
E0831 00:02:29.271439 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/false-504558/client.crt: no such file or directory
E0831 00:02:34.008610 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/old-k8s-version-184466/client.crt: no such file or directory
E0831 00:02:54.488817 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/old-k8s-version-184466/client.crt: no such file or directory
E0831 00:03:06.303818 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kindnet-504558/client.crt: no such file or directory
E0831 00:03:20.981608 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/no-preload-850853/client.crt: no such file or directory
E0831 00:03:20.987032 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/no-preload-850853/client.crt: no such file or directory
E0831 00:03:20.997341 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/no-preload-850853/client.crt: no such file or directory
E0831 00:03:21.017565 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/no-preload-850853/client.crt: no such file or directory
E0831 00:03:21.057813 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/no-preload-850853/client.crt: no such file or directory
E0831 00:03:21.138170 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/no-preload-850853/client.crt: no such file or directory
E0831 00:03:21.298458 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/no-preload-850853/client.crt: no such file or directory
E0831 00:03:21.618796 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/no-preload-850853/client.crt: no such file or directory
E0831 00:03:22.259577 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/no-preload-850853/client.crt: no such file or directory
E0831 00:03:23.540249 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/no-preload-850853/client.crt: no such file or directory
E0831 00:03:26.100848 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/no-preload-850853/client.crt: no such file or directory
E0831 00:03:28.304265 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/auto-504558/client.crt: no such file or directory
E0831 00:03:31.221624 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/no-preload-850853/client.crt: no such file or directory
E0831 00:03:35.449417 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/old-k8s-version-184466/client.crt: no such file or directory
E0831 00:03:41.462304 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/no-preload-850853/client.crt: no such file or directory
E0831 00:04:01.942494 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/no-preload-850853/client.crt: no such file or directory
E0831 00:04:05.064201 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/bridge-504558/client.crt: no such file or directory
E0831 00:04:12.437849 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/flannel-504558/client.crt: no such file or directory
E0831 00:04:29.344813 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kindnet-504558/client.crt: no such file or directory
E0831 00:04:42.902801 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/no-preload-850853/client.crt: no such file or directory
E0831 00:04:51.348973 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/auto-504558/client.crt: no such file or directory
E0831 00:04:57.082569 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/skaffold-345815/client.crt: no such file or directory
E0831 00:04:57.370147 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/old-k8s-version-184466/client.crt: no such file or directory
E0831 00:05:08.765157 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/calico-504558/client.crt: no such file or directory
E0831 00:05:18.229685 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/custom-flannel-504558/client.crt: no such file or directory
E0831 00:06:02.226361 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/ingress-addon-legacy-211142/client.crt: no such file or directory
E0831 00:06:04.823356 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/no-preload-850853/client.crt: no such file or directory
E0831 00:06:17.316544 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kubenet-504558/client.crt: no such file or directory
E0831 00:06:31.807964 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/calico-504558/client.crt: no such file or directory
E0831 00:06:39.103243 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
E0831 00:06:41.273900 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/custom-flannel-504558/client.crt: no such file or directory
E0831 00:06:53.363472 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/enable-default-cni-504558/client.crt: no such file or directory
E0831 00:06:54.038334 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/skaffold-345815/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-441417 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.1: (5m50.005396698s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-441417 -n default-k8s-diff-port-441417
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (350.58s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nnwrn" [44f0d7ca-9e68-4c0f-9369-d90ed8fed8f5] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nnwrn" [44f0d7ca-9e68-4c0f-9369-d90ed8fed8f5] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.028475646s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nnwrn" [44f0d7ca-9e68-4c0f-9369-d90ed8fed8f5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010758842s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-356573 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.48s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-356573 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.48s)

TestStartStop/group/embed-certs/serial/Pause (4.21s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-356573 --alsologtostderr -v=1
E0831 00:07:13.529067 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/old-k8s-version-184466/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-356573 -n embed-certs-356573
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-356573 -n embed-certs-356573: exit status 2 (433.799497ms)

-- stdout --
	Paused
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-356573 -n embed-certs-356573
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-356573 -n embed-certs-356573: exit status 2 (420.93009ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-356573 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-356573 -n embed-certs-356573
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-356573 -n embed-certs-356573
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.21s)

TestStartStop/group/newest-cni/serial/FirstStart (53.92s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-039210 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.1
E0831 00:07:25.342283 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/functional-489151/client.crt: no such file or directory
E0831 00:07:29.271545 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/false-504558/client.crt: no such file or directory
E0831 00:07:41.210938 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/old-k8s-version-184466/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-039210 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.1: (53.921087302s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (53.92s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-spxdl" [45fe1cd0-495d-4bc8-ba2f-3969a5b1430a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-spxdl" [45fe1cd0-495d-4bc8-ba2f-3969a5b1430a] Running
E0831 00:08:02.149155 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/addons-435384/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.032423649s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-spxdl" [45fe1cd0-495d-4bc8-ba2f-3969a5b1430a] Running
E0831 00:08:06.304015 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/kindnet-504558/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011332399s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-441417 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.17s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.59s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-441417 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.59s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-441417 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-441417 --alsologtostderr -v=1: (1.183846565s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-441417 -n default-k8s-diff-port-441417
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-441417 -n default-k8s-diff-port-441417: exit status 2 (544.589705ms)

-- stdout --
	Paused
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-441417 -n default-k8s-diff-port-441417
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-441417 -n default-k8s-diff-port-441417: exit status 2 (554.909428ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-441417 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-441417 --alsologtostderr -v=1: (1.055452997s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-441417 -n default-k8s-diff-port-441417
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-441417 -n default-k8s-diff-port-441417
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.84s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-039210 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-039210 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.167843135s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.17s)

TestStartStop/group/newest-cni/serial/Stop (11.32s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-039210 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-039210 --alsologtostderr -v=3: (11.317494483s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.32s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-039210 -n newest-cni-039210
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-039210 -n newest-cni-039210: exit status 7 (85.951156ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-039210 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (30.54s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-039210 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.1
E0831 00:08:28.304204 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/auto-504558/client.crt: no such file or directory
E0831 00:08:48.663714 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/no-preload-850853/client.crt: no such file or directory
E0831 00:08:52.314356 1502303 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17114-1496922/.minikube/profiles/false-504558/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-039210 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.1: (30.157840197s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-039210 -n newest-cni-039210
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (30.54s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-039210 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/newest-cni/serial/Pause (2.97s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-039210 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-039210 -n newest-cni-039210
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-039210 -n newest-cni-039210: exit status 2 (346.218267ms)

-- stdout --
	Paused
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-039210 -n newest-cni-039210
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-039210 -n newest-cni-039210: exit status 2 (349.985411ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-039210 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-039210 -n newest-cni-039210
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-039210 -n newest-cni-039210
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.97s)

Test skip (24/319)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.1/cached-images (0.00s)

TestDownloadOnly/v1.28.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.1/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.1/binaries (0.00s)

TestDownloadOnly/v1.28.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.1/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.1/kubectl (0.00s)

TestDownloadOnlyKic (0.54s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-024755 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:234: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-024755" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-024755
--- SKIP: TestDownloadOnlyKic (0.54s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:420: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (5.84s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-504558 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-504558

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-504558

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-504558

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-504558

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-504558

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-504558

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-504558

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-504558

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-504558

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-504558

>>> host: /etc/nsswitch.conf:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> host: /etc/hosts:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> host: /etc/resolv.conf:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-504558

>>> host: crictl pods:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> host: crictl containers:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> k8s: describe netcat deployment:
error: context "cilium-504558" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-504558" does not exist

>>> k8s: netcat logs:
error: context "cilium-504558" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-504558" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-504558" does not exist

>>> k8s: coredns logs:
error: context "cilium-504558" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-504558" does not exist

>>> k8s: api server logs:
error: context "cilium-504558" does not exist

>>> host: /etc/cni:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> host: ip a s:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> host: ip r s:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> host: iptables-save:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> host: iptables table nat:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-504558

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-504558

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-504558" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-504558" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-504558

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-504558

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-504558" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-504558" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-504558" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-504558" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-504558" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> host: kubelet daemon config:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> k8s: kubelet logs:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-504558

>>> host: docker daemon status:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> host: docker daemon config:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> host: docker system info:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> host: cri-docker daemon status:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> host: cri-docker daemon config:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> host: cri-dockerd version:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> host: containerd daemon status:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> host: containerd daemon config:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> host: containerd config dump:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> host: crio daemon status:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> host: crio daemon config:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> host: /etc/crio:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"

>>> host: crio config:
* Profile "cilium-504558" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-504558"
----------------------- debugLogs end: cilium-504558 [took: 5.597533014s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-504558" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-504558
--- SKIP: TestNetworkPlugins/group/cilium (5.84s)

TestStartStop/group/disable-driver-mounts (0.25s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-638452" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-638452
--- SKIP: TestStartStop/group/disable-driver-mounts (0.25s)