Test Report: Docker_Linux_containerd_arm64 17957

89df817c127b40a78141e8021123a5a55115ceb7:2024-01-15:32713

Failed tests (8/320)

TestAddons/parallel/Ingress (37.93s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-916083 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-916083 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-916083 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c83bf54d-1f87-4fc4-847f-a47a08921ce0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c83bf54d-1f87-4fc4-847f-a47a08921ce0] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.0045385s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-916083 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-916083 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-916083 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.074005914s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-916083 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p addons-916083 addons disable ingress-dns --alsologtostderr -v=1: (1.290636014s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-916083 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-916083 addons disable ingress --alsologtostderr -v=1: (7.880473825s)
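(For reference, a minimal Go sketch of the failing step above — not the actual addons_test.go code: the test shells out to nslookup against the node IP from `minikube ip` and treats a timeout as a failure. The hostname and IP below are the values from this run.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Values from this run: the ingress-dns hostname under test and the node IP
	// reported by "out/minikube-linux-arm64 -p addons-916083 ip" (addons_test.go:291).
	host, nodeIP := "hello-john.test", "192.168.49.2"
	out, err := exec.Command("nslookup", host, nodeIP).CombinedOutput()
	if err != nil || strings.Contains(string(out), "connection timed out") {
		fmt.Printf("ingress-dns lookup failed: %v\nstdout: %s\n", err, out)
	}
}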
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-916083
helpers_test.go:235: (dbg) docker inspect addons-916083:

-- stdout --
	[
	    {
	        "Id": "74cf3f25b39a7dc5b0512eab07912ff953e0b1906ea86ac8914f7dea7302503f",
	        "Created": "2024-01-15T14:01:51.89005836Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 4002632,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-15T14:01:52.227859779Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20e2d9b56eb2e595fd2b9c5719a0e58f3d7f8c692190d8fde2558cb6a9714f01",
	        "ResolvConfPath": "/var/lib/docker/containers/74cf3f25b39a7dc5b0512eab07912ff953e0b1906ea86ac8914f7dea7302503f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/74cf3f25b39a7dc5b0512eab07912ff953e0b1906ea86ac8914f7dea7302503f/hostname",
	        "HostsPath": "/var/lib/docker/containers/74cf3f25b39a7dc5b0512eab07912ff953e0b1906ea86ac8914f7dea7302503f/hosts",
	        "LogPath": "/var/lib/docker/containers/74cf3f25b39a7dc5b0512eab07912ff953e0b1906ea86ac8914f7dea7302503f/74cf3f25b39a7dc5b0512eab07912ff953e0b1906ea86ac8914f7dea7302503f-json.log",
	        "Name": "/addons-916083",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-916083:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-916083",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cf9d963cc8d65c88e7f016d0d91e93db2454a4a480880e388b87046f7a5fabdd-init/diff:/var/lib/docker/overlay2/37735672df261a15b7a2ba1989e6f3a0906a58ecd248d26a2bc61e23d88a15c5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf9d963cc8d65c88e7f016d0d91e93db2454a4a480880e388b87046f7a5fabdd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf9d963cc8d65c88e7f016d0d91e93db2454a4a480880e388b87046f7a5fabdd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf9d963cc8d65c88e7f016d0d91e93db2454a4a480880e388b87046f7a5fabdd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-916083",
	                "Source": "/var/lib/docker/volumes/addons-916083/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-916083",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-916083",
	                "name.minikube.sigs.k8s.io": "addons-916083",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0b386c2dad80c227c1a8f98d67fc82d80a4b8b592f3166fa0a1f0e4072d0c5a6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36439"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36438"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36435"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36437"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36436"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0b386c2dad80",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-916083": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "74cf3f25b39a",
	                        "addons-916083"
	                    ],
	                    "NetworkID": "df7f910ab822e8bb791b6bacf9aafc3fb36a7a28df4815084863cbae77a7a61b",
	                    "EndpointID": "1bcc56304f7d3df50b2a337c191736660012f7230dca2d9051d1af9ab4ac67e6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
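(Aside: the mapped SSH host port under NetworkSettings.Ports above, 36439, is what the post-mortem helpers dial. A minimal sketch, assuming the container name from this run, of the same Go-template lookup the cli_runner lines later in these logs perform:)

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same template as the cli_runner "docker container inspect -f" calls below.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, "addons-916083").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	fmt.Printf("SSH host port: %s\n", out) // "36439" in this run
}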
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-916083 -n addons-916083
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-916083 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-916083 logs -n 25: (1.665416843s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 15 Jan 24 14:01 UTC | 15 Jan 24 14:01 UTC |
	| delete  | -p download-only-851187              | download-only-851187   | jenkins | v1.32.0 | 15 Jan 24 14:01 UTC | 15 Jan 24 14:01 UTC |
	| start   | -o=json --download-only              | download-only-168263   | jenkins | v1.32.0 | 15 Jan 24 14:01 UTC |                     |
	|         | -p download-only-168263              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2    |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 15 Jan 24 14:01 UTC | 15 Jan 24 14:01 UTC |
	| delete  | -p download-only-168263              | download-only-168263   | jenkins | v1.32.0 | 15 Jan 24 14:01 UTC | 15 Jan 24 14:01 UTC |
	| delete  | -p download-only-450455              | download-only-450455   | jenkins | v1.32.0 | 15 Jan 24 14:01 UTC | 15 Jan 24 14:01 UTC |
	| delete  | -p download-only-851187              | download-only-851187   | jenkins | v1.32.0 | 15 Jan 24 14:01 UTC | 15 Jan 24 14:01 UTC |
	| delete  | -p download-only-168263              | download-only-168263   | jenkins | v1.32.0 | 15 Jan 24 14:01 UTC | 15 Jan 24 14:01 UTC |
	| start   | --download-only -p                   | download-docker-152127 | jenkins | v1.32.0 | 15 Jan 24 14:01 UTC |                     |
	|         | download-docker-152127               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-152127            | download-docker-152127 | jenkins | v1.32.0 | 15 Jan 24 14:01 UTC | 15 Jan 24 14:01 UTC |
	| start   | --download-only -p                   | binary-mirror-093958   | jenkins | v1.32.0 | 15 Jan 24 14:01 UTC |                     |
	|         | binary-mirror-093958                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:41435               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-093958              | binary-mirror-093958   | jenkins | v1.32.0 | 15 Jan 24 14:01 UTC | 15 Jan 24 14:01 UTC |
	| addons  | enable dashboard -p                  | addons-916083          | jenkins | v1.32.0 | 15 Jan 24 14:01 UTC |                     |
	|         | addons-916083                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-916083          | jenkins | v1.32.0 | 15 Jan 24 14:01 UTC |                     |
	|         | addons-916083                        |                        |         |         |                     |                     |
	| start   | -p addons-916083 --wait=true         | addons-916083          | jenkins | v1.32.0 | 15 Jan 24 14:01 UTC | 15 Jan 24 14:03 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker        |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| ip      | addons-916083 ip                     | addons-916083          | jenkins | v1.32.0 | 15 Jan 24 14:04 UTC | 15 Jan 24 14:04 UTC |
	| addons  | addons-916083 addons disable         | addons-916083          | jenkins | v1.32.0 | 15 Jan 24 14:04 UTC | 15 Jan 24 14:04 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-916083 addons                 | addons-916083          | jenkins | v1.32.0 | 15 Jan 24 14:04 UTC | 15 Jan 24 14:04 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-916083          | jenkins | v1.32.0 | 15 Jan 24 14:04 UTC | 15 Jan 24 14:04 UTC |
	|         | addons-916083                        |                        |         |         |                     |                     |
	| ssh     | addons-916083 ssh curl -s            | addons-916083          | jenkins | v1.32.0 | 15 Jan 24 14:04 UTC | 15 Jan 24 14:04 UTC |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-916083 ip                     | addons-916083          | jenkins | v1.32.0 | 15 Jan 24 14:04 UTC | 15 Jan 24 14:04 UTC |
	| addons  | addons-916083 addons                 | addons-916083          | jenkins | v1.32.0 | 15 Jan 24 14:04 UTC | 15 Jan 24 14:04 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-916083 addons disable         | addons-916083          | jenkins | v1.32.0 | 15 Jan 24 14:04 UTC | 15 Jan 24 14:04 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-916083 addons disable         | addons-916083          | jenkins | v1.32.0 | 15 Jan 24 14:04 UTC | 15 Jan 24 14:04 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-916083 addons                 | addons-916083          | jenkins | v1.32.0 | 15 Jan 24 14:04 UTC | 15 Jan 24 14:04 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 14:01:28
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 14:01:28.472523 4002183 out.go:296] Setting OutFile to fd 1 ...
	I0115 14:01:28.472713 4002183 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 14:01:28.472738 4002183 out.go:309] Setting ErrFile to fd 2...
	I0115 14:01:28.472757 4002183 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 14:01:28.473017 4002183 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-3996034/.minikube/bin
	I0115 14:01:28.473532 4002183 out.go:303] Setting JSON to false
	I0115 14:01:28.474413 4002183 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":67432,"bootTime":1705259857,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0115 14:01:28.474516 4002183 start.go:138] virtualization:  
	I0115 14:01:28.477100 4002183 out.go:177] * [addons-916083] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0115 14:01:28.479340 4002183 out.go:177]   - MINIKUBE_LOCATION=17957
	I0115 14:01:28.481223 4002183 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 14:01:28.479493 4002183 notify.go:220] Checking for updates...
	I0115 14:01:28.483312 4002183 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17957-3996034/kubeconfig
	I0115 14:01:28.485274 4002183 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-3996034/.minikube
	I0115 14:01:28.487226 4002183 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0115 14:01:28.489055 4002183 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 14:01:28.491453 4002183 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 14:01:28.515408 4002183 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 14:01:28.515557 4002183 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 14:01:28.594659 4002183 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-15 14:01:28.584754318 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 14:01:28.594766 4002183 docker.go:295] overlay module found
	I0115 14:01:28.596894 4002183 out.go:177] * Using the docker driver based on user configuration
	I0115 14:01:28.598594 4002183 start.go:298] selected driver: docker
	I0115 14:01:28.598623 4002183 start.go:902] validating driver "docker" against <nil>
	I0115 14:01:28.598637 4002183 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 14:01:28.599304 4002183 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 14:01:28.671259 4002183 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-15 14:01:28.661566705 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 14:01:28.671438 4002183 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 14:01:28.671696 4002183 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 14:01:28.673744 4002183 out.go:177] * Using Docker driver with root privileges
	I0115 14:01:28.675791 4002183 cni.go:84] Creating CNI manager for ""
	I0115 14:01:28.675854 4002183 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0115 14:01:28.675871 4002183 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0115 14:01:28.675886 4002183 start_flags.go:321] config:
	{Name:addons-916083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-916083 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 14:01:28.678493 4002183 out.go:177] * Starting control plane node addons-916083 in cluster addons-916083
	I0115 14:01:28.680553 4002183 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0115 14:01:28.682736 4002183 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0115 14:01:28.684812 4002183 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0115 14:01:28.684872 4002183 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17957-3996034/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0115 14:01:28.684885 4002183 cache.go:56] Caching tarball of preloaded images
	I0115 14:01:28.684914 4002183 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0115 14:01:28.684972 4002183 preload.go:174] Found /home/jenkins/minikube-integration/17957-3996034/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0115 14:01:28.684982 4002183 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0115 14:01:28.685352 4002183 profile.go:148] Saving config to /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/config.json ...
	I0115 14:01:28.685379 4002183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/config.json: {Name:mk92c7fbdca34bd5c56edbab295eadcbe0b00279 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:01:28.702207 4002183 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0115 14:01:28.702321 4002183 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0115 14:01:28.702340 4002183 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0115 14:01:28.702344 4002183 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0115 14:01:28.702355 4002183 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0115 14:01:28.702361 4002183 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 from local cache
	I0115 14:01:44.394325 4002183 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 from cached tarball
	I0115 14:01:44.394365 4002183 cache.go:194] Successfully downloaded all kic artifacts
	I0115 14:01:44.394443 4002183 start.go:365] acquiring machines lock for addons-916083: {Name:mk4ca45dcb3f98d8bf4134cef8afee4f8ad9a7b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 14:01:44.394567 4002183 start.go:369] acquired machines lock for "addons-916083" in 101.454µs
	I0115 14:01:44.394597 4002183 start.go:93] Provisioning new machine with config: &{Name:addons-916083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-916083 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0115 14:01:44.394681 4002183 start.go:125] createHost starting for "" (driver="docker")
	I0115 14:01:44.397178 4002183 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0115 14:01:44.397424 4002183 start.go:159] libmachine.API.Create for "addons-916083" (driver="docker")
	I0115 14:01:44.397454 4002183 client.go:168] LocalClient.Create starting
	I0115 14:01:44.397582 4002183 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/ca.pem
	I0115 14:01:44.600775 4002183 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/cert.pem
	I0115 14:01:45.678440 4002183 cli_runner.go:164] Run: docker network inspect addons-916083 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0115 14:01:45.695291 4002183 cli_runner.go:211] docker network inspect addons-916083 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0115 14:01:45.695386 4002183 network_create.go:281] running [docker network inspect addons-916083] to gather additional debugging logs...
	I0115 14:01:45.695410 4002183 cli_runner.go:164] Run: docker network inspect addons-916083
	W0115 14:01:45.711857 4002183 cli_runner.go:211] docker network inspect addons-916083 returned with exit code 1
	I0115 14:01:45.711891 4002183 network_create.go:284] error running [docker network inspect addons-916083]: docker network inspect addons-916083: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-916083 not found
	I0115 14:01:45.711916 4002183 network_create.go:286] output of [docker network inspect addons-916083]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-916083 not found
	
	** /stderr **
	I0115 14:01:45.712032 4002183 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 14:01:45.729595 4002183 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400258b090}
	I0115 14:01:45.729632 4002183 network_create.go:124] attempt to create docker network addons-916083 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0115 14:01:45.729691 4002183 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-916083 addons-916083
	I0115 14:01:45.802119 4002183 network_create.go:108] docker network addons-916083 192.168.49.0/24 created
	I0115 14:01:45.802157 4002183 kic.go:121] calculated static IP "192.168.49.2" for the "addons-916083" container
	I0115 14:01:45.802232 4002183 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0115 14:01:45.818622 4002183 cli_runner.go:164] Run: docker volume create addons-916083 --label name.minikube.sigs.k8s.io=addons-916083 --label created_by.minikube.sigs.k8s.io=true
	I0115 14:01:45.837648 4002183 oci.go:103] Successfully created a docker volume addons-916083
	I0115 14:01:45.837751 4002183 cli_runner.go:164] Run: docker run --rm --name addons-916083-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-916083 --entrypoint /usr/bin/test -v addons-916083:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0115 14:01:47.640508 4002183 cli_runner.go:217] Completed: docker run --rm --name addons-916083-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-916083 --entrypoint /usr/bin/test -v addons-916083:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib: (1.802715553s)
	I0115 14:01:47.640539 4002183 oci.go:107] Successfully prepared a docker volume addons-916083
	I0115 14:01:47.640566 4002183 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0115 14:01:47.640585 4002183 kic.go:194] Starting extracting preloaded images to volume ...
	I0115 14:01:47.640674 4002183 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17957-3996034/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-916083:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0115 14:01:51.805673 4002183 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17957-3996034/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-916083:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.164943924s)
	I0115 14:01:51.805717 4002183 kic.go:203] duration metric: took 4.165129 seconds to extract preloaded images to volume
	W0115 14:01:51.805857 4002183 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0115 14:01:51.805974 4002183 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0115 14:01:51.873950 4002183 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-916083 --name addons-916083 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-916083 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-916083 --network addons-916083 --ip 192.168.49.2 --volume addons-916083:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0115 14:01:52.236227 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Running}}
	I0115 14:01:52.255087 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:01:52.275087 4002183 cli_runner.go:164] Run: docker exec addons-916083 stat /var/lib/dpkg/alternatives/iptables
	I0115 14:01:52.345975 4002183 oci.go:144] the created container "addons-916083" has a running status.
	I0115 14:01:52.346008 4002183 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa...
	I0115 14:01:52.848644 4002183 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0115 14:01:52.893489 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:01:52.933978 4002183 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0115 14:01:52.934003 4002183 kic_runner.go:114] Args: [docker exec --privileged addons-916083 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0115 14:01:53.012652 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:01:53.050562 4002183 machine.go:88] provisioning docker machine ...
	I0115 14:01:53.050592 4002183 ubuntu.go:169] provisioning hostname "addons-916083"
	I0115 14:01:53.050662 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:01:53.083599 4002183 main.go:141] libmachine: Using SSH client type: native
	I0115 14:01:53.084113 4002183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 36439 <nil> <nil>}
	I0115 14:01:53.084132 4002183 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-916083 && echo "addons-916083" | sudo tee /etc/hostname
	I0115 14:01:53.286918 4002183 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-916083
	
	I0115 14:01:53.287100 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:01:53.310525 4002183 main.go:141] libmachine: Using SSH client type: native
	I0115 14:01:53.310939 4002183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 36439 <nil> <nil>}
	I0115 14:01:53.310957 4002183 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-916083' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-916083/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-916083' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 14:01:53.464605 4002183 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 14:01:53.464639 4002183 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17957-3996034/.minikube CaCertPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17957-3996034/.minikube}
	I0115 14:01:53.464678 4002183 ubuntu.go:177] setting up certificates
	I0115 14:01:53.464687 4002183 provision.go:83] configureAuth start
	I0115 14:01:53.464750 4002183 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-916083
	I0115 14:01:53.486711 4002183 provision.go:138] copyHostCerts
	I0115 14:01:53.486802 4002183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17957-3996034/.minikube/ca.pem (1082 bytes)
	I0115 14:01:53.486961 4002183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17957-3996034/.minikube/cert.pem (1123 bytes)
	I0115 14:01:53.487029 4002183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17957-3996034/.minikube/key.pem (1679 bytes)
	I0115 14:01:53.487078 4002183 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17957-3996034/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/ca-key.pem org=jenkins.addons-916083 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-916083]
	I0115 14:01:53.760942 4002183 provision.go:172] copyRemoteCerts
	I0115 14:01:53.761033 4002183 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 14:01:53.761082 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:01:53.780076 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:01:53.878254 4002183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0115 14:01:53.906918 4002183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0115 14:01:53.935967 4002183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0115 14:01:53.964554 4002183 provision.go:86] duration metric: configureAuth took 499.852546ms
	I0115 14:01:53.964588 4002183 ubuntu.go:193] setting minikube options for container-runtime
	I0115 14:01:53.964792 4002183 config.go:182] Loaded profile config "addons-916083": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 14:01:53.964804 4002183 machine.go:91] provisioned docker machine in 914.225268ms
	I0115 14:01:53.964811 4002183 client.go:171] LocalClient.Create took 9.567349931s
	I0115 14:01:53.964823 4002183 start.go:167] duration metric: libmachine.API.Create for "addons-916083" took 9.567401852s
	I0115 14:01:53.964835 4002183 start.go:300] post-start starting for "addons-916083" (driver="docker")
	I0115 14:01:53.964850 4002183 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 14:01:53.964909 4002183 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 14:01:53.964986 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:01:53.982347 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:01:54.082281 4002183 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 14:01:54.086494 4002183 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0115 14:01:54.086532 4002183 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0115 14:01:54.086544 4002183 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0115 14:01:54.086552 4002183 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0115 14:01:54.086563 4002183 filesync.go:126] Scanning /home/jenkins/minikube-integration/17957-3996034/.minikube/addons for local assets ...
	I0115 14:01:54.086637 4002183 filesync.go:126] Scanning /home/jenkins/minikube-integration/17957-3996034/.minikube/files for local assets ...
	I0115 14:01:54.086663 4002183 start.go:303] post-start completed in 121.819581ms
	I0115 14:01:54.087006 4002183 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-916083
	I0115 14:01:54.104494 4002183 profile.go:148] Saving config to /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/config.json ...
	I0115 14:01:54.104783 4002183 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 14:01:54.104832 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:01:54.122498 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:01:54.217303 4002183 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0115 14:01:54.222931 4002183 start.go:128] duration metric: createHost completed in 9.828233949s
	I0115 14:01:54.222963 4002183 start.go:83] releasing machines lock for "addons-916083", held for 9.828382958s
	I0115 14:01:54.223040 4002183 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-916083
	I0115 14:01:54.240410 4002183 ssh_runner.go:195] Run: cat /version.json
	I0115 14:01:54.240431 4002183 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 14:01:54.240467 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:01:54.240497 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:01:54.259280 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:01:54.261640 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:01:54.356002 4002183 ssh_runner.go:195] Run: systemctl --version
	I0115 14:01:54.493994 4002183 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0115 14:01:54.499713 4002183 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0115 14:01:54.530037 4002183 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0115 14:01:54.530119 4002183 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 14:01:54.564612 4002183 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
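	The two find/sed invocations above first patch any loopback CNI config (injecting a "name" field and pinning cniVersion to 1.0.0), then rename bridge/podman configs out of the way, which is what the "disabled" line reports. A minimal sketch of the loopback patch against a scratch file; the JSON contents are illustrative, not copied from the node:
	cat > /tmp/200-loopback.conf <<-'EOF'
	{
	    "cniVersion": "0.3.1",
	    "type": "loopback"
	}
	EOF
	# Same two edits as the logged command: add "name" before the type, pin cniVersion
	grep -q '"name"' /tmp/200-loopback.conf || \
	  sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' /tmp/200-loopback.conf
	sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' /tmp/200-loopback.conf
	cat /tmp/200-loopback.conf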
	I0115 14:01:54.564645 4002183 start.go:475] detecting cgroup driver to use...
	I0115 14:01:54.564679 4002183 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0115 14:01:54.564742 4002183 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0115 14:01:54.578868 4002183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0115 14:01:54.592358 4002183 docker.go:217] disabling cri-docker service (if available) ...
	I0115 14:01:54.592475 4002183 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 14:01:54.608234 4002183 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 14:01:54.624061 4002183 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 14:01:54.726895 4002183 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 14:01:54.835581 4002183 docker.go:233] disabling docker service ...
	I0115 14:01:54.835648 4002183 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 14:01:54.856768 4002183 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 14:01:54.872078 4002183 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 14:01:54.970760 4002183 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 14:01:55.073684 4002183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 14:01:55.088031 4002183 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 14:01:55.108336 4002183 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0115 14:01:55.120357 4002183 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0115 14:01:55.132898 4002183 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0115 14:01:55.132965 4002183 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0115 14:01:55.146020 4002183 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0115 14:01:55.158173 4002183 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0115 14:01:55.170328 4002183 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0115 14:01:55.182378 4002183 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 14:01:55.193366 4002183 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
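	The sed series above rewrites /etc/containerd/config.toml in place: the pause image is pinned to registry.k8s.io/pause:3.9, restrict_oom_score_adj is turned off, SystemdCgroup is set to false to match the cgroupfs driver detected earlier, the legacy runc v1/v1.linux runtimes are mapped to io.containerd.runc.v2, and conf_dir is pointed at /etc/cni/net.d. A scratch sketch of the cgroup-driver edit, using an illustrative fragment rather than the node's real file:
	cat > /tmp/config.toml <<-'EOF'
	[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = true
	EOF
	# Same edit as the logged command: force cgroupfs by flipping SystemdCgroup
	sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /tmp/config.toml
	grep SystemdCgroup /tmp/config.toml   # -> SystemdCgroup = false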
	I0115 14:01:55.205089 4002183 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 14:01:55.215464 4002183 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 14:01:55.225903 4002183 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 14:01:55.326835 4002183 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0115 14:01:55.473674 4002183 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0115 14:01:55.473756 4002183 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
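	The stat above is the first probe of the advertised 60s wait for the containerd socket after the restart. The same wait, written out as a plain poll loop:
	for _ in $(seq 1 60); do
	  stat /run/containerd/containerd.sock >/dev/null 2>&1 && break
	  sleep 1
	done
	stat /run/containerd/containerd.sock   # non-zero exit here means the socket never appeared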
	I0115 14:01:55.478419 4002183 start.go:543] Will wait 60s for crictl version
	I0115 14:01:55.478485 4002183 ssh_runner.go:195] Run: which crictl
	I0115 14:01:55.482810 4002183 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 14:01:55.526483 4002183 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.26
	RuntimeApiVersion:  v1
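	The version probe succeeds because /etc/crictl.yaml, written at 14:01:55.088 above, points crictl at containerd's socket. Reproduced as a standalone sketch:
	printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' | sudo tee /etc/crictl.yaml
	sudo crictl version   # should report RuntimeName: containerd, as above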
	I0115 14:01:55.526569 4002183 ssh_runner.go:195] Run: containerd --version
	I0115 14:01:55.558666 4002183 ssh_runner.go:195] Run: containerd --version
	I0115 14:01:55.594261 4002183 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.26 ...
	I0115 14:01:55.596245 4002183 cli_runner.go:164] Run: docker network inspect addons-916083 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 14:01:55.613131 4002183 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0115 14:01:55.617698 4002183 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
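	The one-liner above is an idempotent hosts update: strip any existing host.minikube.internal entry, append the fresh mapping, and copy the result back over /etc/hosts via a temp file (the control-plane.minikube.internal entry later in the log is written the same way). Expanded for readability:
	TMP=/tmp/h.$$
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '192.168.49.1\thost.minikube.internal\n'
	} > "$TMP"
	sudo cp "$TMP" /etc/hosts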
	I0115 14:01:55.631329 4002183 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0115 14:01:55.631423 4002183 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 14:01:55.674730 4002183 containerd.go:612] all images are preloaded for containerd runtime.
	I0115 14:01:55.674757 4002183 containerd.go:519] Images already preloaded, skipping extraction
	I0115 14:01:55.674815 4002183 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 14:01:55.714649 4002183 containerd.go:612] all images are preloaded for containerd runtime.
	I0115 14:01:55.714674 4002183 cache_images.go:84] Images are preloaded, skipping loading
	I0115 14:01:55.714741 4002183 ssh_runner.go:195] Run: sudo crictl info
	I0115 14:01:55.755731 4002183 cni.go:84] Creating CNI manager for ""
	I0115 14:01:55.755757 4002183 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0115 14:01:55.755786 4002183 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 14:01:55.755804 4002183 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-916083 NodeName:addons-916083 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 14:01:55.755935 4002183 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-916083"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0115 14:01:55.756003 4002183 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-916083 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-916083 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0115 14:01:55.756069 4002183 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 14:01:55.766669 4002183 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 14:01:55.766775 4002183 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 14:01:55.777069 4002183 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (385 bytes)
	I0115 14:01:55.798058 4002183 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 14:01:55.818887 4002183 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
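	At this point the rendered three-document kubeadm config shown above is staged as /var/tmp/minikube/kubeadm.yaml.new. A hedged sketch of how such a staged config could be exercised without touching the cluster, assuming the same kubeadm binary the init below uses:
	sudo /var/lib/minikube/binaries/v1.28.4/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run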
	I0115 14:01:55.839527 4002183 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0115 14:01:55.843794 4002183 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 14:01:55.856774 4002183 certs.go:56] Setting up /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083 for IP: 192.168.49.2
	I0115 14:01:55.856808 4002183 certs.go:190] acquiring lock for shared ca certs: {Name:mk9e910b1d22df90feaffa3b68f77c94f902dcfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:01:55.856937 4002183 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17957-3996034/.minikube/ca.key
	I0115 14:01:56.365558 4002183 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17957-3996034/.minikube/ca.crt ...
	I0115 14:01:56.365589 4002183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/.minikube/ca.crt: {Name:mk9316865b0b0941ddfd00975a3bc8e7a0880170 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:01:56.365795 4002183 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17957-3996034/.minikube/ca.key ...
	I0115 14:01:56.365809 4002183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/.minikube/ca.key: {Name:mk154151ca5d9b8cca9e9c2d0311b4724132fce7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:01:56.365895 4002183 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17957-3996034/.minikube/proxy-client-ca.key
	I0115 14:01:56.598002 4002183 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17957-3996034/.minikube/proxy-client-ca.crt ...
	I0115 14:01:56.598030 4002183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/.minikube/proxy-client-ca.crt: {Name:mk840a90585cdf3c26c2e019ac23ab831ac23f64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:01:56.598205 4002183 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17957-3996034/.minikube/proxy-client-ca.key ...
	I0115 14:01:56.598216 4002183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/.minikube/proxy-client-ca.key: {Name:mkff8a5cf2f609e63496d40510f33a3131dec2be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:01:56.598333 4002183 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.key
	I0115 14:01:56.598348 4002183 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt with IP's: []
	I0115 14:01:56.801804 4002183 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt ...
	I0115 14:01:56.801835 4002183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: {Name:mk81eda9512287f09041d3cbe740f7cff0d6ddc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:01:56.802017 4002183 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.key ...
	I0115 14:01:56.802029 4002183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.key: {Name:mkc9faf286c154cb994c3becb8a3ed3476eae285 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:01:56.802648 4002183 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/apiserver.key.dd3b5fb2
	I0115 14:01:56.802672 4002183 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0115 14:01:57.038559 4002183 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/apiserver.crt.dd3b5fb2 ...
	I0115 14:01:57.038598 4002183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/apiserver.crt.dd3b5fb2: {Name:mkf606d0b807afb756347bd3c22025099a5ff12c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:01:57.038809 4002183 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/apiserver.key.dd3b5fb2 ...
	I0115 14:01:57.038825 4002183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/apiserver.key.dd3b5fb2: {Name:mk62dbb3ce4d8e00be64a3f5a490d58f50d25a5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:01:57.039457 4002183 certs.go:337] copying /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/apiserver.crt
	I0115 14:01:57.039546 4002183 certs.go:341] copying /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/apiserver.key
	I0115 14:01:57.039598 4002183 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/proxy-client.key
	I0115 14:01:57.039620 4002183 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/proxy-client.crt with IP's: []
	I0115 14:01:57.676232 4002183 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/proxy-client.crt ...
	I0115 14:01:57.676263 4002183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/proxy-client.crt: {Name:mk5d6f5b33710a8dc7ecc907fa9718898af26a9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:01:57.676451 4002183 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/proxy-client.key ...
	I0115 14:01:57.676465 4002183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/proxy-client.key: {Name:mka07b3b18353f4266ea68729926c05d85e25671 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:01:57.676660 4002183 certs.go:437] found cert: /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/ca-key.pem (1675 bytes)
	I0115 14:01:57.676708 4002183 certs.go:437] found cert: /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/ca.pem (1082 bytes)
	I0115 14:01:57.676742 4002183 certs.go:437] found cert: /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/cert.pem (1123 bytes)
	I0115 14:01:57.676771 4002183 certs.go:437] found cert: /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/key.pem (1679 bytes)
	I0115 14:01:57.677415 4002183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 14:01:57.705801 4002183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0115 14:01:57.734680 4002183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 14:01:57.763596 4002183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 14:01:57.792049 4002183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 14:01:57.822918 4002183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0115 14:01:57.851756 4002183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 14:01:57.880260 4002183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 14:01:57.908723 4002183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 14:01:57.939016 4002183 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 14:01:57.961996 4002183 ssh_runner.go:195] Run: openssl version
	I0115 14:01:57.969405 4002183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 14:01:57.981095 4002183 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 14:01:57.985968 4002183 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 14:01 /usr/share/ca-certificates/minikubeCA.pem
	I0115 14:01:57.986083 4002183 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 14:01:57.994558 4002183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
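	The two steps above install minikubeCA.pem into the OpenSSL trust directory twice: once by name and once under its subject hash, since OpenSSL resolves CAs as <hash>.0. The hash in the symlink name can be reproduced directly:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941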
	I0115 14:01:58.006782 4002183 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 14:01:58.011446 4002183 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0115 14:01:58.011512 4002183 kubeadm.go:404] StartCluster: {Name:addons-916083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-916083 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 14:01:58.011598 4002183 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0115 14:01:58.011689 4002183 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 14:01:58.054429 4002183 cri.go:89] found id: ""
	I0115 14:01:58.054510 4002183 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 14:01:58.065307 4002183 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 14:01:58.076395 4002183 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0115 14:01:58.076484 4002183 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 14:01:58.087298 4002183 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 14:01:58.087374 4002183 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0115 14:01:58.149228 4002183 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0115 14:01:58.149608 4002183 kubeadm.go:322] [preflight] Running pre-flight checks
	I0115 14:01:58.197654 4002183 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0115 14:01:58.197728 4002183 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I0115 14:01:58.197770 4002183 kubeadm.go:322] OS: Linux
	I0115 14:01:58.197821 4002183 kubeadm.go:322] CGROUPS_CPU: enabled
	I0115 14:01:58.197871 4002183 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0115 14:01:58.197919 4002183 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0115 14:01:58.197968 4002183 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0115 14:01:58.198017 4002183 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0115 14:01:58.198066 4002183 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0115 14:01:58.198112 4002183 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0115 14:01:58.198160 4002183 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0115 14:01:58.198207 4002183 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0115 14:01:58.286461 4002183 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0115 14:01:58.286629 4002183 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0115 14:01:58.286763 4002183 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0115 14:01:58.532870 4002183 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0115 14:01:58.535222 4002183 out.go:204]   - Generating certificates and keys ...
	I0115 14:01:58.535436 4002183 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0115 14:01:58.535547 4002183 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0115 14:01:58.914220 4002183 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0115 14:01:59.518539 4002183 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0115 14:02:00.244843 4002183 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0115 14:02:01.709142 4002183 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0115 14:02:02.064867 4002183 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0115 14:02:02.065005 4002183 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-916083 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0115 14:02:02.403222 4002183 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0115 14:02:02.403366 4002183 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-916083 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0115 14:02:02.866282 4002183 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0115 14:02:04.731843 4002183 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0115 14:02:05.086262 4002183 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0115 14:02:05.086556 4002183 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0115 14:02:06.248847 4002183 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0115 14:02:06.641936 4002183 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0115 14:02:07.211748 4002183 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0115 14:02:07.385460 4002183 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0115 14:02:07.386082 4002183 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0115 14:02:07.388801 4002183 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0115 14:02:07.391413 4002183 out.go:204]   - Booting up control plane ...
	I0115 14:02:07.391516 4002183 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0115 14:02:07.391590 4002183 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0115 14:02:07.392914 4002183 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0115 14:02:07.407921 4002183 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 14:02:07.409519 4002183 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 14:02:07.409952 4002183 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0115 14:02:07.515312 4002183 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0115 14:02:15.018215 4002183 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502932 seconds
	I0115 14:02:15.018344 4002183 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0115 14:02:15.034313 4002183 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0115 14:02:15.560340 4002183 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0115 14:02:15.560530 4002183 kubeadm.go:322] [mark-control-plane] Marking the node addons-916083 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0115 14:02:16.071665 4002183 kubeadm.go:322] [bootstrap-token] Using token: s4mcyt.e4j4waoo0vgsvs3m
	I0115 14:02:16.073687 4002183 out.go:204]   - Configuring RBAC rules ...
	I0115 14:02:16.073815 4002183 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0115 14:02:16.078902 4002183 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0115 14:02:16.087163 4002183 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0115 14:02:16.090901 4002183 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0115 14:02:16.095692 4002183 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0115 14:02:16.100936 4002183 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0115 14:02:16.115637 4002183 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0115 14:02:16.361487 4002183 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0115 14:02:16.483057 4002183 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0115 14:02:16.484286 4002183 kubeadm.go:322] 
	I0115 14:02:16.484359 4002183 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0115 14:02:16.484373 4002183 kubeadm.go:322] 
	I0115 14:02:16.484447 4002183 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0115 14:02:16.484456 4002183 kubeadm.go:322] 
	I0115 14:02:16.484481 4002183 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0115 14:02:16.484704 4002183 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0115 14:02:16.484764 4002183 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0115 14:02:16.484777 4002183 kubeadm.go:322] 
	I0115 14:02:16.484829 4002183 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0115 14:02:16.484838 4002183 kubeadm.go:322] 
	I0115 14:02:16.484883 4002183 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0115 14:02:16.484892 4002183 kubeadm.go:322] 
	I0115 14:02:16.484942 4002183 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0115 14:02:16.485016 4002183 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0115 14:02:16.485087 4002183 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0115 14:02:16.485097 4002183 kubeadm.go:322] 
	I0115 14:02:16.485176 4002183 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0115 14:02:16.485251 4002183 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0115 14:02:16.485260 4002183 kubeadm.go:322] 
	I0115 14:02:16.485350 4002183 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token s4mcyt.e4j4waoo0vgsvs3m \
	I0115 14:02:16.485452 4002183 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7a6d785f4518c70e5cb54aff2b25c2e4257d667a1215c730d9bd23381d7f6388 \
	I0115 14:02:16.485477 4002183 kubeadm.go:322] 	--control-plane 
	I0115 14:02:16.485482 4002183 kubeadm.go:322] 
	I0115 14:02:16.485568 4002183 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0115 14:02:16.485579 4002183 kubeadm.go:322] 
	I0115 14:02:16.485657 4002183 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token s4mcyt.e4j4waoo0vgsvs3m \
	I0115 14:02:16.485756 4002183 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7a6d785f4518c70e5cb54aff2b25c2e4257d667a1215c730d9bd23381d7f6388 
	I0115 14:02:16.488943 4002183 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0115 14:02:16.489056 4002183 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
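	The join commands printed above embed a --discovery-token-ca-cert-hash. If that output is lost, the hash can be recomputed on the control plane with the standard openssl pipeline from the Kubernetes docs; the cert path below is the certificatesDir from the kubeadm config above:
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'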
	I0115 14:02:16.489078 4002183 cni.go:84] Creating CNI manager for ""
	I0115 14:02:16.489087 4002183 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0115 14:02:16.491454 4002183 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0115 14:02:16.493520 4002183 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0115 14:02:16.503580 4002183 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0115 14:02:16.503600 4002183 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0115 14:02:16.534027 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0115 14:02:17.433303 4002183 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 14:02:17.433490 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:17.433614 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=71cf7d00913f789829bf5813c1d11b9a83eda53e minikube.k8s.io/name=addons-916083 minikube.k8s.io/updated_at=2024_01_15T14_02_17_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:17.451120 4002183 ops.go:34] apiserver oom_adj: -16
	I0115 14:02:17.584748 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:18.084811 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:18.585426 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:19.085032 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:19.585055 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:20.084873 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:20.584886 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:21.085232 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:21.585555 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:22.084918 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:22.584862 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:23.084907 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:23.585605 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:24.085758 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:24.585751 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:25.084885 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:25.585620 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:26.085764 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:26.584873 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:27.085436 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:27.585242 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:28.085044 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:28.584952 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:29.084865 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:29.585788 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:30.085156 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:30.179004 4002183 kubeadm.go:1088] duration metric: took 12.745572227s to wait for elevateKubeSystemPrivileges.
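	The ~12.7s of repeated "kubectl get sa default" calls between 14:02:17 and 14:02:30 are a poll for the default ServiceAccount to exist before system privileges are elevated. The same wait as a single loop:
	KUBECTL=/var/lib/minikube/binaries/v1.28.4/kubectl
	until sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done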
	I0115 14:02:30.179037 4002183 kubeadm.go:406] StartCluster complete in 32.167547669s
	I0115 14:02:30.179054 4002183 settings.go:142] acquiring lock: {Name:mkf7c3579062a76dbc15f21d34a0f70748bbdf8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:02:30.179796 4002183 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17957-3996034/kubeconfig
	I0115 14:02:30.180210 4002183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/kubeconfig: {Name:mk3afa6cfd54a2e8849d9a076ecc839592eb1132 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:02:30.180970 4002183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 14:02:30.181261 4002183 config.go:182] Loaded profile config "addons-916083": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 14:02:30.181381 4002183 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0115 14:02:30.181465 4002183 addons.go:69] Setting yakd=true in profile "addons-916083"
	I0115 14:02:30.181482 4002183 addons.go:234] Setting addon yakd=true in "addons-916083"
	I0115 14:02:30.181538 4002183 host.go:66] Checking if "addons-916083" exists ...
	I0115 14:02:30.182016 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:30.182485 4002183 addons.go:69] Setting cloud-spanner=true in profile "addons-916083"
	I0115 14:02:30.182503 4002183 addons.go:234] Setting addon cloud-spanner=true in "addons-916083"
	I0115 14:02:30.182535 4002183 host.go:66] Checking if "addons-916083" exists ...
	I0115 14:02:30.182934 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:30.183358 4002183 addons.go:69] Setting metrics-server=true in profile "addons-916083"
	I0115 14:02:30.183381 4002183 addons.go:234] Setting addon metrics-server=true in "addons-916083"
	I0115 14:02:30.183413 4002183 host.go:66] Checking if "addons-916083" exists ...
	I0115 14:02:30.183803 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:30.184229 4002183 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-916083"
	I0115 14:02:30.184270 4002183 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-916083"
	I0115 14:02:30.184300 4002183 host.go:66] Checking if "addons-916083" exists ...
	I0115 14:02:30.184682 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:30.194292 4002183 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-916083"
	I0115 14:02:30.194584 4002183 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-916083"
	I0115 14:02:30.194641 4002183 host.go:66] Checking if "addons-916083" exists ...
	I0115 14:02:30.197485 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:30.199331 4002183 addons.go:69] Setting default-storageclass=true in profile "addons-916083"
	I0115 14:02:30.199369 4002183 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-916083"
	I0115 14:02:30.199678 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:30.206452 4002183 addons.go:69] Setting registry=true in profile "addons-916083"
	I0115 14:02:30.206527 4002183 addons.go:234] Setting addon registry=true in "addons-916083"
	I0115 14:02:30.206606 4002183 host.go:66] Checking if "addons-916083" exists ...
	I0115 14:02:30.207083 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:30.216505 4002183 addons.go:69] Setting gcp-auth=true in profile "addons-916083"
	I0115 14:02:30.216544 4002183 mustload.go:65] Loading cluster: addons-916083
	I0115 14:02:30.216784 4002183 config.go:182] Loaded profile config "addons-916083": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 14:02:30.217038 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:30.217467 4002183 addons.go:69] Setting storage-provisioner=true in profile "addons-916083"
	I0115 14:02:30.217490 4002183 addons.go:234] Setting addon storage-provisioner=true in "addons-916083"
	I0115 14:02:30.217539 4002183 host.go:66] Checking if "addons-916083" exists ...
	I0115 14:02:30.217931 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:30.229688 4002183 addons.go:69] Setting ingress=true in profile "addons-916083"
	I0115 14:02:30.229782 4002183 addons.go:234] Setting addon ingress=true in "addons-916083"
	I0115 14:02:30.229900 4002183 host.go:66] Checking if "addons-916083" exists ...
	I0115 14:02:30.230603 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:30.252461 4002183 addons.go:69] Setting ingress-dns=true in profile "addons-916083"
	I0115 14:02:30.252499 4002183 addons.go:234] Setting addon ingress-dns=true in "addons-916083"
	I0115 14:02:30.252553 4002183 host.go:66] Checking if "addons-916083" exists ...
	I0115 14:02:30.253043 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:30.253339 4002183 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-916083"
	I0115 14:02:30.253377 4002183 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-916083"
	I0115 14:02:30.253677 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:30.264969 4002183 addons.go:69] Setting volumesnapshots=true in profile "addons-916083"
	I0115 14:02:30.265004 4002183 addons.go:234] Setting addon volumesnapshots=true in "addons-916083"
	I0115 14:02:30.265061 4002183 host.go:66] Checking if "addons-916083" exists ...
	I0115 14:02:30.265523 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:30.265673 4002183 addons.go:69] Setting inspektor-gadget=true in profile "addons-916083"
	I0115 14:02:30.265688 4002183 addons.go:234] Setting addon inspektor-gadget=true in "addons-916083"
	I0115 14:02:30.265717 4002183 host.go:66] Checking if "addons-916083" exists ...
	I0115 14:02:30.266084 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:30.427036 4002183 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0115 14:02:30.431026 4002183 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0115 14:02:30.434001 4002183 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0115 14:02:30.446147 4002183 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0115 14:02:30.446211 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0115 14:02:30.446304 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:02:30.431307 4002183 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0115 14:02:30.472179 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0115 14:02:30.472418 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:02:30.477935 4002183 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0115 14:02:30.482695 4002183 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0115 14:02:30.486142 4002183 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0115 14:02:30.489808 4002183 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0115 14:02:30.491901 4002183 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0115 14:02:30.494337 4002183 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0115 14:02:30.432454 4002183 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-916083"
	I0115 14:02:30.433977 4002183 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0115 14:02:30.482566 4002183 addons.go:234] Setting addon default-storageclass=true in "addons-916083"
	I0115 14:02:30.431314 4002183 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0115 14:02:30.494269 4002183 host.go:66] Checking if "addons-916083" exists ...
	I0115 14:02:30.496175 4002183 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0115 14:02:30.496219 4002183 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0115 14:02:30.496249 4002183 host.go:66] Checking if "addons-916083" exists ...
	I0115 14:02:30.496255 4002183 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 14:02:30.496266 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0115 14:02:30.499554 4002183 host.go:66] Checking if "addons-916083" exists ...
	I0115 14:02:30.499566 4002183 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0115 14:02:30.499570 4002183 out.go:177]   - Using image docker.io/registry:2.8.3
	I0115 14:02:30.502537 4002183 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0115 14:02:30.503098 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:30.503137 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:02:30.504987 4002183 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0115 14:02:30.505502 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:30.507138 4002183 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0115 14:02:30.509486 4002183 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0115 14:02:30.511771 4002183 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 14:02:30.515221 4002183 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0115 14:02:30.515303 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0115 14:02:30.523784 4002183 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0115 14:02:30.523847 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0115 14:02:30.523854 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 14:02:30.531966 4002183 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0115 14:02:30.531989 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0115 14:02:30.534181 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:02:30.534191 4002183 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0115 14:02:30.543019 4002183 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0115 14:02:30.536720 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0115 14:02:30.536734 4002183 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0115 14:02:30.536801 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:02:30.536829 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:02:30.537558 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:02:30.550184 4002183 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0115 14:02:30.550273 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:02:30.582041 4002183 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0115 14:02:30.582067 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0115 14:02:30.582182 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:02:30.554095 4002183 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0115 14:02:30.605878 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0115 14:02:30.605958 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:02:30.623615 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
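
Each sshutil.go:53 "new ssh client" line above and below is one SSH connection into the node, built entirely from fields visible in the log: user docker on 127.0.0.1:36439, authenticated with the per-machine id_rsa. An equivalent dial with golang.org/x/crypto/ssh looks like this (a sketch; minikube's sshutil adds its own wrapping):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker", // the Username from the log line
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local throwaway node
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:36439", cfg)
        if err != nil {
            panic(err) // a transient "ssh: handshake failed: EOF" here is what gets retried below
        }
        defer client.Close()
        fmt.Println("connected")
    }
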
	I0115 14:02:30.630924 4002183 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 14:02:30.630951 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 14:02:30.631016 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:02:30.578248 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0115 14:02:30.634644 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:02:30.645972 4002183 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0115 14:02:30.653849 4002183 out.go:177]   - Using image docker.io/busybox:stable
	I0115 14:02:30.655964 4002183 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0115 14:02:30.655987 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0115 14:02:30.656055 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:02:30.644671 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:02:30.695417 4002183 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-916083" context rescaled to 1 replicas
	I0115 14:02:30.695455 4002183 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0115 14:02:30.697631 4002183 out.go:177] * Verifying Kubernetes components...
	I0115 14:02:30.701895 4002183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 14:02:30.702352 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:02:30.791402 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:02:30.811387 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:02:30.831460 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:02:30.856328 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:02:30.868827 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:02:30.872813 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:02:30.879425 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:02:30.888950 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:02:30.907641 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:02:30.919157 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	W0115 14:02:30.920975 4002183 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0115 14:02:30.921008 4002183 retry.go:31] will retry after 319.272093ms: ssh: handshake failed: EOF
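
The handshake EOF above is a startup race: the port mapping already exists, but sshd inside the freshly created container is not accepting connections yet, so one of the eleven concurrent dials gets dropped and retry.go backs off and tries again (the fractional 319.272093ms delay suggests jitter). The pattern, as a generic sketch rather than minikube's retry.go:

    package retryutil

    import (
        "log"
        "time"
    )

    // Do retries fn with exponential backoff, logging in the same shape as
    // the "will retry after 319.272093ms: ssh: handshake failed: EOF" line.
    func Do(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            sleep := base << uint(i) // exponential; real implementations add jitter and a cap
            log.Printf("will retry after %v: %v", sleep, err)
            time.Sleep(sleep)
        }
        return err
    }
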
	I0115 14:02:31.067250 4002183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0115 14:02:31.075713 4002183 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0115 14:02:31.075776 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0115 14:02:31.109908 4002183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
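
The bash one-liner above rewrites CoreDNS's Corefile in place: it dumps the coredns ConfigMap, uses sed to splice a hosts block in front of the forward directive (and a log directive after errors), then pipes the result back through kubectl replace. With the shell escaping removed, the spliced block is:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }

192.168.49.1 is the gateway of the docker network the node sits on, so after this every pod can resolve host.minikube.internal back to the host machine; the "host record injected" line a few seconds later confirms the replace went through.
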
	I0115 14:02:31.110818 4002183 node_ready.go:35] waiting up to 6m0s for node "addons-916083" to be "Ready" ...
	I0115 14:02:31.115448 4002183 node_ready.go:49] node "addons-916083" has status "Ready":"True"
	I0115 14:02:31.115515 4002183 node_ready.go:38] duration metric: took 4.605127ms waiting for node "addons-916083" to be "Ready" ...
	I0115 14:02:31.115539 4002183 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 14:02:31.124610 4002183 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6pkbg" in "kube-system" namespace to be "Ready" ...
	I0115 14:02:31.217234 4002183 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0115 14:02:31.217384 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0115 14:02:31.237417 4002183 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0115 14:02:31.237487 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0115 14:02:31.257409 4002183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 14:02:31.279060 4002183 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0115 14:02:31.279084 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0115 14:02:31.288406 4002183 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0115 14:02:31.288473 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0115 14:02:31.397940 4002183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 14:02:31.439013 4002183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0115 14:02:31.576583 4002183 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0115 14:02:31.576659 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0115 14:02:31.603643 4002183 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 14:02:31.603726 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0115 14:02:31.615809 4002183 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0115 14:02:31.615880 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0115 14:02:31.639514 4002183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0115 14:02:31.740479 4002183 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0115 14:02:31.740553 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0115 14:02:31.765595 4002183 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0115 14:02:31.765668 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0115 14:02:31.772237 4002183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0115 14:02:31.774208 4002183 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0115 14:02:31.774263 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0115 14:02:31.866153 4002183 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0115 14:02:31.866217 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0115 14:02:31.880021 4002183 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0115 14:02:31.880094 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0115 14:02:31.893082 4002183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0115 14:02:31.929612 4002183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 14:02:31.998791 4002183 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0115 14:02:31.998812 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0115 14:02:32.004920 4002183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0115 14:02:32.069926 4002183 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0115 14:02:32.069959 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0115 14:02:32.091798 4002183 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0115 14:02:32.091869 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0115 14:02:32.128238 4002183 pod_ready.go:97] error getting pod "coredns-5dd5756b68-6pkbg" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-6pkbg" not found
	I0115 14:02:32.128315 4002183 pod_ready.go:81] duration metric: took 1.003632118s waiting for pod "coredns-5dd5756b68-6pkbg" in "kube-system" namespace to be "Ready" ...
	E0115 14:02:32.128340 4002183 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-6pkbg" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-6pkbg" not found
	I0115 14:02:32.128360 4002183 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace to be "Ready" ...
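
The NotFound churn above is self-inflicted: at 14:02:30.695 the coredns deployment was rescaled to a single replica, so the pod the wait loop first latched onto (coredns-5dd5756b68-6pkbg) was deleted mid-wait. pod_ready treats NotFound as "skip this pod" rather than a test failure and moves on to the survivor (coredns-5dd5756b68-nbgjt). A client-go sketch of that tolerance (illustrative names, not minikube's pod_ready.go):

    package podwait

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // Ready reports whether the named pod has condition Ready=True. NotFound
    // is surfaced unchanged so a caller can skip pods deleted mid-wait, as
    // happened to coredns-5dd5756b68-6pkbg after the rescale to 1 replica.
    func Ready(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
        pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return false, err // caller logs "(skipping!)" and picks the next pod
        }
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }
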
	I0115 14:02:32.173025 4002183 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0115 14:02:32.173096 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0115 14:02:32.333659 4002183 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0115 14:02:32.333729 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0115 14:02:32.337879 4002183 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0115 14:02:32.337942 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0115 14:02:32.429002 4002183 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0115 14:02:32.429063 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0115 14:02:32.458649 4002183 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0115 14:02:32.458716 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0115 14:02:32.639091 4002183 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0115 14:02:32.639165 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0115 14:02:32.651844 4002183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0115 14:02:32.719273 4002183 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0115 14:02:32.719300 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0115 14:02:32.763014 4002183 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0115 14:02:32.763041 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0115 14:02:32.945009 4002183 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0115 14:02:32.945036 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0115 14:02:32.967873 4002183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0115 14:02:33.104524 4002183 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0115 14:02:33.104549 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0115 14:02:33.298765 4002183 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0115 14:02:33.298791 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0115 14:02:33.385597 4002183 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0115 14:02:33.385628 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0115 14:02:33.485171 4002183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0115 14:02:33.597515 4002183 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0115 14:02:33.597542 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0115 14:02:33.826956 4002183 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0115 14:02:33.826982 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0115 14:02:34.136413 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:02:34.177295 4002183 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0115 14:02:34.177323 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0115 14:02:34.313976 4002183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0115 14:02:34.431481 4002183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.364198228s)
	I0115 14:02:34.431544 4002183 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.321563307s)
	I0115 14:02:34.431558 4002183 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0115 14:02:35.121313 4002183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.863833163s)
	I0115 14:02:35.121403 4002183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.723405041s)
	I0115 14:02:35.121632 4002183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.682553576s)
	I0115 14:02:35.121681 4002183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.482100843s)
	I0115 14:02:35.886024 4002183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.113707315s)
	I0115 14:02:36.164815 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:02:37.316295 4002183 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0115 14:02:37.316379 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:02:37.359340 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:02:37.573960 4002183 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0115 14:02:37.644519 4002183 addons.go:234] Setting addon gcp-auth=true in "addons-916083"
	I0115 14:02:37.644607 4002183 host.go:66] Checking if "addons-916083" exists ...
	I0115 14:02:37.645099 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:37.690416 4002183 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0115 14:02:37.690477 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:02:37.743440 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:02:38.442111 4002183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.548944812s)
	I0115 14:02:38.442184 4002183 addons.go:470] Verifying addon ingress=true in "addons-916083"
	I0115 14:02:38.446729 4002183 out.go:177] * Verifying ingress addon...
	I0115 14:02:38.442425 4002183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.512737431s)
	I0115 14:02:38.442465 4002183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.43752348s)
	I0115 14:02:38.442506 4002183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.790587215s)
	I0115 14:02:38.442580 4002183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.474679805s)
	I0115 14:02:38.442652 4002183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.95745613s)
	I0115 14:02:38.449018 4002183 addons.go:470] Verifying addon registry=true in "addons-916083"
	I0115 14:02:38.455589 4002183 out.go:177] * Verifying registry addon...
	W0115 14:02:38.449206 4002183 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0115 14:02:38.449428 4002183 addons.go:470] Verifying addon metrics-server=true in "addons-916083"
	I0115 14:02:38.450205 4002183 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0115 14:02:38.459344 4002183 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0115 14:02:38.459497 4002183 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-916083 service yakd-dashboard -n yakd-dashboard
	
	I0115 14:02:38.459566 4002183 retry.go:31] will retry after 306.433825ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
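
Both failures above are the same CRD-establishment race: a single kubectl apply creates the VolumeSnapshot CRDs and, in the same batch, a VolumeSnapshotClass object, but the API server has not finished establishing the new kinds when the CR arrives, hence "no matches for kind ... ensure CRDs are installed first". minikube's remedy is simply the retry visible below (kubectl apply --force at 14:02:38.783, completing about 1.76s later). An alternative is to wait for the CRD's Established condition before applying any CRs; a sketch with the apiextensions client (illustrative, not what addons.go does):

    package crdwait

    import (
        "context"
        "time"

        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
    )

    // Established polls until the named CRD reports Established=True, i.e.
    // the API server can map the new kind. Applying a CR before that point
    // yields exactly the "resource mapping not found" failure shown above.
    func Established(ctx context.Context, c apiextclient.Interface, name string) error {
        return wait.PollUntilContextTimeout(ctx, 250*time.Millisecond, 30*time.Second, true,
            func(ctx context.Context) (bool, error) {
                crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // not visible yet; keep polling
                }
                for _, cond := range crd.Status.Conditions {
                    if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
                        return true, nil
                    }
                }
                return false, nil
            })
    }
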
	I0115 14:02:38.464198 4002183 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0115 14:02:38.475568 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:38.466216 4002183 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0115 14:02:38.475593 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:38.637726 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:02:38.783122 4002183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0115 14:02:38.966427 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:38.973621 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:39.490270 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:39.491614 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:39.965960 4002183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.651934566s)
	I0115 14:02:39.966033 4002183 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-916083"
	I0115 14:02:39.968384 4002183 out.go:177] * Verifying csi-hostpath-driver addon...
	I0115 14:02:39.966225 4002183 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.275780011s)
	I0115 14:02:39.970904 4002183 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0115 14:02:39.971665 4002183 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0115 14:02:39.972935 4002183 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0115 14:02:39.974551 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:39.975450 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:39.976482 4002183 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0115 14:02:39.976573 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0115 14:02:39.994622 4002183 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0115 14:02:39.994687 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
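
From here to the end of the section the log is dominated by kapi.go:96 poll ticks: four independent wait loops (ingress-nginx, registry, csi-hostpath-driver, and shortly gcp-auth), each listing pods by label selector roughly every half second and logging the pending state until the pods turn Running. The shape of such a loop in client-go terms (a sketch; the interval and timeout here are placeholders, not minikube's values):

    package kwait

    import (
        "context"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // ForLabeledPods polls pods matching selector in ns until all are
    // Running, logging each pending tick the way the kapi.go:96 lines do.
    func ForLabeledPods(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // transient errors and empty lists just mean "try again"
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        log.Printf("waiting for pod %q, current state: %s", selector, p.Status.Phase)
                        return false, nil
                    }
                }
                return true, nil
            })
    }
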
	I0115 14:02:40.057810 4002183 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0115 14:02:40.057885 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0115 14:02:40.128183 4002183 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0115 14:02:40.128257 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0115 14:02:40.202859 4002183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0115 14:02:40.464698 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:40.467197 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:40.487538 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:40.540337 4002183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.757168101s)
	I0115 14:02:40.965977 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:40.966380 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:40.980243 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:41.136767 4002183 addons.go:470] Verifying addon gcp-auth=true in "addons-916083"
	I0115 14:02:41.140244 4002183 out.go:177] * Verifying gcp-auth addon...
	I0115 14:02:41.143848 4002183 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0115 14:02:41.150059 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:02:41.150532 4002183 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0115 14:02:41.150551 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:41.471608 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:41.484048 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:41.485337 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:41.648008 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:41.965688 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:41.966843 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:41.979719 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:42.149317 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:42.466130 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:42.468536 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:42.479093 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:42.648948 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:42.962514 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:42.964819 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:42.979334 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:43.148780 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:43.463025 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:43.463908 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:43.478554 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:43.635497 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:02:43.648189 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:43.964033 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:43.965595 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:43.979992 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:44.147904 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:44.464533 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:44.467463 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:44.480006 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:44.648881 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:44.964848 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:44.967426 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:44.979380 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:45.149020 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:45.463360 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:45.466983 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:45.479138 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:45.635643 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:02:45.648712 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:45.965123 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:45.965642 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:45.979180 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:46.148529 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:46.463125 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:46.464459 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:46.479129 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:46.648730 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:46.963940 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:46.965097 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:46.979009 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:47.148585 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:47.464032 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:47.468388 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:47.483735 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:47.636949 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:02:47.649476 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:47.965060 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:47.966003 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:47.978606 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:48.148371 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:48.462780 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:48.465672 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:48.479160 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:48.647424 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:48.971833 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:48.972007 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:48.979327 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:49.148682 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:49.463782 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:49.464760 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:49.479793 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:49.648975 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:49.963149 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:49.965757 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:49.979036 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:50.135860 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:02:50.147536 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:50.462787 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:50.464556 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:50.478400 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:50.647681 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:50.962629 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:50.963836 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:50.978605 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:51.147653 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:51.463735 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:51.464020 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:51.478921 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:51.647415 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:51.962569 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:51.964914 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:51.978519 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:52.147294 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:52.463596 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:52.465826 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:52.478423 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:52.634786 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:02:52.647942 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:52.964960 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:52.966109 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:52.978872 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:53.147465 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:53.464581 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:53.465678 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:53.478472 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:53.647376 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:53.964344 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:53.964663 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:53.978865 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:54.147589 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:54.464218 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:54.465157 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:54.478359 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:54.634909 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:02:54.647757 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:54.964558 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:54.965997 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:54.978629 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:55.147469 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:55.464749 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:55.465893 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:55.480076 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:55.647462 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:55.962451 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:55.964595 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:55.978611 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:56.147553 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:56.463584 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:56.464975 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:56.478756 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:56.635132 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:02:56.647361 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:56.963538 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:56.965124 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:56.979172 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:57.147497 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:57.463865 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:57.464302 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:57.479427 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:57.648202 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:57.962757 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:57.963645 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:57.978937 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:58.148351 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:58.462753 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:58.464791 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:58.478418 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:58.647941 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:58.962253 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:58.964239 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:58.978704 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:59.134833 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:02:59.147845 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:59.462796 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:59.464898 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:59.477884 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:59.648408 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:59.962252 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:59.964691 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:59.978090 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:00.148508 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:00.463088 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:00.464099 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:00.479129 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:00.648201 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:00.963280 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:00.964150 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:00.978610 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:01.147163 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:01.465607 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:01.467438 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:01.479126 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:01.639383 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:03:01.648241 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:01.963893 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:01.964429 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:01.978960 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:02.149449 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:02.466254 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:02.467412 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:02.479519 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:02.648727 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:02.966791 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:02.967846 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:02.979875 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:03.147720 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:03.465652 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:03.466837 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:03.478712 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:03.648141 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:03.966441 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:03.967511 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:03.979682 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:04.136932 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:03:04.148324 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:04.464943 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:04.465866 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:04.478613 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:04.648029 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:04.963669 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:04.964535 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:04.978461 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:05.147838 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:05.464070 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:05.465291 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:05.479509 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:05.648729 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:05.968357 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:05.969305 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:05.980156 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:06.148033 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:06.462622 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:06.464996 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:06.478805 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:06.636037 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:03:06.648327 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:06.965290 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:06.967224 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:06.978793 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:07.148024 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:07.463703 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:07.464712 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:07.480115 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:07.649063 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:07.965754 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:07.966850 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:07.979869 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:08.147499 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:08.463673 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:08.465794 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:08.478121 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:08.647549 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:08.967099 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:08.968152 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:08.981447 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:09.135332 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:03:09.147118 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:09.464459 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:09.465037 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:09.478817 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:09.648244 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:09.964807 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:09.965667 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:09.979066 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:10.147571 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:10.463743 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:10.464582 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:10.481811 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:10.648197 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:10.985997 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:10.989596 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:10.995467 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:11.136784 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:03:11.148803 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:11.466785 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:11.468118 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:11.480715 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:11.648878 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:11.976133 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:11.976922 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:11.994571 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:12.138306 4002183 pod_ready.go:92] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"True"
	I0115 14:03:12.138382 4002183 pod_ready.go:81] duration metric: took 40.009981094s waiting for pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace to be "Ready" ...
	I0115 14:03:12.138410 4002183 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-916083" in "kube-system" namespace to be "Ready" ...
	I0115 14:03:12.162682 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:12.164739 4002183 pod_ready.go:92] pod "etcd-addons-916083" in "kube-system" namespace has status "Ready":"True"
	I0115 14:03:12.164803 4002183 pod_ready.go:81] duration metric: took 26.371494ms waiting for pod "etcd-addons-916083" in "kube-system" namespace to be "Ready" ...
	I0115 14:03:12.164833 4002183 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-916083" in "kube-system" namespace to be "Ready" ...
	I0115 14:03:12.181087 4002183 pod_ready.go:92] pod "kube-apiserver-addons-916083" in "kube-system" namespace has status "Ready":"True"
	I0115 14:03:12.181162 4002183 pod_ready.go:81] duration metric: took 16.306823ms waiting for pod "kube-apiserver-addons-916083" in "kube-system" namespace to be "Ready" ...
	I0115 14:03:12.181189 4002183 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-916083" in "kube-system" namespace to be "Ready" ...
	I0115 14:03:12.199261 4002183 pod_ready.go:92] pod "kube-controller-manager-addons-916083" in "kube-system" namespace has status "Ready":"True"
	I0115 14:03:12.199329 4002183 pod_ready.go:81] duration metric: took 18.119037ms waiting for pod "kube-controller-manager-addons-916083" in "kube-system" namespace to be "Ready" ...
	I0115 14:03:12.199356 4002183 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fs7hg" in "kube-system" namespace to be "Ready" ...
	I0115 14:03:12.214973 4002183 pod_ready.go:92] pod "kube-proxy-fs7hg" in "kube-system" namespace has status "Ready":"True"
	I0115 14:03:12.215046 4002183 pod_ready.go:81] duration metric: took 15.655232ms waiting for pod "kube-proxy-fs7hg" in "kube-system" namespace to be "Ready" ...
	I0115 14:03:12.215076 4002183 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-916083" in "kube-system" namespace to be "Ready" ...
	I0115 14:03:12.466644 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:12.467867 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:12.478755 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:12.532994 4002183 pod_ready.go:92] pod "kube-scheduler-addons-916083" in "kube-system" namespace has status "Ready":"True"
	I0115 14:03:12.533027 4002183 pod_ready.go:81] duration metric: took 317.928809ms waiting for pod "kube-scheduler-addons-916083" in "kube-system" namespace to be "Ready" ...
	I0115 14:03:12.533038 4002183 pod_ready.go:38] duration metric: took 41.417475935s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
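
The pod_ready.go lines above show the readiness gate: each system pod is polled until its Ready condition reports "True", and a per-pod duration metric is logged on success. A minimal sketch of that check using client-go (the 2-second poll interval, 6-minute timeout, and kubeconfig path are assumptions for illustration, not minikube's actual values):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 2s, give up after 6m, mirroring the "waiting up to 6m0s" lines above.
	err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-nbgjt", metav1.GetOptions{})
		if err != nil {
			return false, nil // transient errors: keep polling
		}
		return isPodReady(pod), nil
	})
	fmt.Println("ready:", err == nil)
}

Transient Get errors are swallowed so the poll simply retries on the next tick, which matches the log's behavior of re-checking the same pod until it flips to Ready.
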
	I0115 14:03:12.533052 4002183 api_server.go:52] waiting for apiserver process to appear ...
	I0115 14:03:12.533117 4002183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 14:03:12.551491 4002183 api_server.go:72] duration metric: took 41.856008876s to wait for apiserver process to appear ...
	I0115 14:03:12.551518 4002183 api_server.go:88] waiting for apiserver healthz status ...
	I0115 14:03:12.551539 4002183 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0115 14:03:12.561202 4002183 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0115 14:03:12.562577 4002183 api_server.go:141] control plane version: v1.28.4
	I0115 14:03:12.562604 4002183 api_server.go:131] duration metric: took 11.078928ms to wait for apiserver health ...
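
The healthz probe above is a plain HTTPS GET that counts the apiserver as healthy once /healthz returns 200 with body "ok". A rough equivalent sketch (skipping certificate verification for brevity is an assumption; minikube itself validates against the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert is signed by the cluster CA; this quick probe
		// skips verification instead of loading that CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// Healthy when the endpoint returns 200 with body "ok", as in the log above.
	fmt.Printf("status=%d body=%q healthy=%v\n", resp.StatusCode, body,
		resp.StatusCode == 200 && string(body) == "ok")
}
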
	I0115 14:03:12.562619 4002183 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 14:03:12.648546 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:12.740629 4002183 system_pods.go:59] 18 kube-system pods found
	I0115 14:03:12.740667 4002183 system_pods.go:61] "coredns-5dd5756b68-nbgjt" [43a41f50-fc86-4450-92f0-647531dfb3a6] Running
	I0115 14:03:12.740678 4002183 system_pods.go:61] "csi-hostpath-attacher-0" [d28d28f1-dd34-4a36-b1c0-9a2f48d68a02] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0115 14:03:12.740687 4002183 system_pods.go:61] "csi-hostpath-resizer-0" [6450d9dd-310b-4f7e-8c36-9376ababd82d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0115 14:03:12.740698 4002183 system_pods.go:61] "csi-hostpathplugin-j5mdh" [a2652475-1a19-4825-b0df-29a7c90b5c6d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0115 14:03:12.740707 4002183 system_pods.go:61] "etcd-addons-916083" [ecd374c3-b2bc-43f9-9ffb-f1e90f3e56a5] Running
	I0115 14:03:12.740713 4002183 system_pods.go:61] "kindnet-6r7md" [04c8cf3d-7c92-4d8a-a7e2-b7c376d3eb7b] Running
	I0115 14:03:12.740724 4002183 system_pods.go:61] "kube-apiserver-addons-916083" [5a630a36-4424-4b9e-9583-9bfe87adb3ff] Running
	I0115 14:03:12.740729 4002183 system_pods.go:61] "kube-controller-manager-addons-916083" [64ae8a0e-7851-490c-899a-d987c1708fa0] Running
	I0115 14:03:12.740737 4002183 system_pods.go:61] "kube-ingress-dns-minikube" [2106a856-cb65-4ce7-84ae-6bc223f27497] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0115 14:03:12.740747 4002183 system_pods.go:61] "kube-proxy-fs7hg" [e6f3d1de-7ff6-4630-b33c-5511a78fe470] Running
	I0115 14:03:12.740752 4002183 system_pods.go:61] "kube-scheduler-addons-916083" [8c5b2462-756f-45c2-bdb4-303bf46fa948] Running
	I0115 14:03:12.740759 4002183 system_pods.go:61] "metrics-server-7c66d45ddc-2qp4d" [d0a4b682-7faf-459b-a7d0-8873c8b2db17] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 14:03:12.740776 4002183 system_pods.go:61] "nvidia-device-plugin-daemonset-dj78p" [10888201-3bd5-457a-aa04-7bc6a2d2dc6a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0115 14:03:12.740784 4002183 system_pods.go:61] "registry-htcrm" [51ffa260-a633-46c3-8d2c-1a9690503666] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0115 14:03:12.740790 4002183 system_pods.go:61] "registry-proxy-74zd5" [f108cc01-7802-4b5f-8935-c829e0ac2f02] Running
	I0115 14:03:12.740798 4002183 system_pods.go:61] "snapshot-controller-58dbcc7b99-bzhl5" [dd38a8e6-1095-44b6-a257-7322dd8369e7] Running
	I0115 14:03:12.740803 4002183 system_pods.go:61] "snapshot-controller-58dbcc7b99-szsw9" [c542b9d8-bd4a-48a2-8471-8e1b6a2b2cf8] Running
	I0115 14:03:12.740811 4002183 system_pods.go:61] "storage-provisioner" [175a8490-3dc2-47a2-a5bf-54717b94f58b] Running
	I0115 14:03:12.740822 4002183 system_pods.go:74] duration metric: took 178.196575ms to wait for pod list to return data ...
	I0115 14:03:12.740831 4002183 default_sa.go:34] waiting for default service account to be created ...
	I0115 14:03:12.931643 4002183 default_sa.go:45] found service account: "default"
	I0115 14:03:12.931672 4002183 default_sa.go:55] duration metric: took 190.82905ms for default service account to be created ...
	I0115 14:03:12.931682 4002183 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 14:03:12.970241 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:12.971455 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:12.990736 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:13.140990 4002183 system_pods.go:86] 18 kube-system pods found
	I0115 14:03:13.141070 4002183 system_pods.go:89] "coredns-5dd5756b68-nbgjt" [43a41f50-fc86-4450-92f0-647531dfb3a6] Running
	I0115 14:03:13.141088 4002183 system_pods.go:89] "csi-hostpath-attacher-0" [d28d28f1-dd34-4a36-b1c0-9a2f48d68a02] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0115 14:03:13.141097 4002183 system_pods.go:89] "csi-hostpath-resizer-0" [6450d9dd-310b-4f7e-8c36-9376ababd82d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0115 14:03:13.141106 4002183 system_pods.go:89] "csi-hostpathplugin-j5mdh" [a2652475-1a19-4825-b0df-29a7c90b5c6d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0115 14:03:13.141115 4002183 system_pods.go:89] "etcd-addons-916083" [ecd374c3-b2bc-43f9-9ffb-f1e90f3e56a5] Running
	I0115 14:03:13.141121 4002183 system_pods.go:89] "kindnet-6r7md" [04c8cf3d-7c92-4d8a-a7e2-b7c376d3eb7b] Running
	I0115 14:03:13.141129 4002183 system_pods.go:89] "kube-apiserver-addons-916083" [5a630a36-4424-4b9e-9583-9bfe87adb3ff] Running
	I0115 14:03:13.141136 4002183 system_pods.go:89] "kube-controller-manager-addons-916083" [64ae8a0e-7851-490c-899a-d987c1708fa0] Running
	I0115 14:03:13.141144 4002183 system_pods.go:89] "kube-ingress-dns-minikube" [2106a856-cb65-4ce7-84ae-6bc223f27497] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0115 14:03:13.141151 4002183 system_pods.go:89] "kube-proxy-fs7hg" [e6f3d1de-7ff6-4630-b33c-5511a78fe470] Running
	I0115 14:03:13.141159 4002183 system_pods.go:89] "kube-scheduler-addons-916083" [8c5b2462-756f-45c2-bdb4-303bf46fa948] Running
	I0115 14:03:13.141165 4002183 system_pods.go:89] "metrics-server-7c66d45ddc-2qp4d" [d0a4b682-7faf-459b-a7d0-8873c8b2db17] Running
	I0115 14:03:13.141176 4002183 system_pods.go:89] "nvidia-device-plugin-daemonset-dj78p" [10888201-3bd5-457a-aa04-7bc6a2d2dc6a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0115 14:03:13.141182 4002183 system_pods.go:89] "registry-htcrm" [51ffa260-a633-46c3-8d2c-1a9690503666] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0115 14:03:13.141194 4002183 system_pods.go:89] "registry-proxy-74zd5" [f108cc01-7802-4b5f-8935-c829e0ac2f02] Running
	I0115 14:03:13.141200 4002183 system_pods.go:89] "snapshot-controller-58dbcc7b99-bzhl5" [dd38a8e6-1095-44b6-a257-7322dd8369e7] Running
	I0115 14:03:13.141208 4002183 system_pods.go:89] "snapshot-controller-58dbcc7b99-szsw9" [c542b9d8-bd4a-48a2-8471-8e1b6a2b2cf8] Running
	I0115 14:03:13.141212 4002183 system_pods.go:89] "storage-provisioner" [175a8490-3dc2-47a2-a5bf-54717b94f58b] Running
	I0115 14:03:13.141219 4002183 system_pods.go:126] duration metric: took 209.53212ms to wait for k8s-apps to be running ...
	I0115 14:03:13.141336 4002183 system_svc.go:44] waiting for kubelet service to be running ...
	I0115 14:03:13.141422 4002183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 14:03:13.148863 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:13.184887 4002183 system_svc.go:56] duration metric: took 43.542727ms for WaitForService to wait for kubelet.
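
The kubelet check above shells out to systemctl and relies only on the exit status. A small sketch of the same probe, run locally rather than over SSH (an assumption; minikube routes it through ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` exits 0 when the unit is active
	// and non-zero otherwise; --quiet suppresses the state string on stdout.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	fmt.Println("kubelet running:", err == nil)
}
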
	I0115 14:03:13.184916 4002183 kubeadm.go:581] duration metric: took 42.489439437s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 14:03:13.184941 4002183 node_conditions.go:102] verifying NodePressure condition ...
	I0115 14:03:13.332174 4002183 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0115 14:03:13.332208 4002183 node_conditions.go:123] node cpu capacity is 2
	I0115 14:03:13.332222 4002183 node_conditions.go:105] duration metric: took 147.27564ms to run NodePressure ...
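
The NodePressure step reads node capacity (203034800Ki of ephemeral storage and 2 CPUs here) and checks the node's pressure conditions. A hedged client-go sketch that surfaces the same fields:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		for _, c := range n.Status.Conditions {
			// A healthy node reports its pressure conditions as False.
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}
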
	I0115 14:03:13.332233 4002183 start.go:228] waiting for startup goroutines ...
	I0115 14:03:13.465734 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:13.466285 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:13.480178 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:13.648281 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:13.965029 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:13.966267 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:13.979961 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:14.148460 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:14.463008 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:14.465501 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:14.479070 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:14.647680 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:14.971737 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:14.972514 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:14.989073 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:15.148022 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:15.462717 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:15.465931 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:15.480166 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:15.648072 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:15.964704 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:15.967957 4002183 kapi.go:107] duration metric: took 37.508611211s to wait for kubernetes.io/minikube-addons=registry ...
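
The kapi.go:96 loops throughout this log poll pods matched by a label selector (kubernetes.io/minikube-addons=registry and friends) until they leave Pending, then log a duration metric like the one above. The same label-selector wait can be expressed with kubectl's wait subcommand; a sketch under stated assumptions (context name taken from this run, namespace from the kube-system pod list above, timeout chosen arbitrarily):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Block until every pod carrying the addon label reports Ready.
	cmd := exec.Command("kubectl", "--context", "addons-916083",
		"-n", "kube-system", "wait", "--for=condition=Ready", "pod",
		"-l", "kubernetes.io/minikube-addons=registry", "--timeout=6m")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s(err=%v)\n", out, err)
}
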
	I0115 14:03:15.981103 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:16.148184 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:16.464011 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:16.478780 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:16.647405 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:16.971041 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:16.990931 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:17.147750 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:17.463766 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:17.479340 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:17.648152 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:17.964122 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:17.979620 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:18.148521 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:18.463611 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:18.479302 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:18.648430 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:18.970001 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:18.981634 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:19.148409 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:19.462840 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:19.479156 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:19.648031 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:19.963219 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:19.978654 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:20.148751 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:20.466790 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:20.479808 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:20.647602 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:20.962625 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:20.979978 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:21.148350 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:21.463225 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:21.478872 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:21.647688 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:21.962717 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:21.979583 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:22.148562 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:22.462911 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:22.479398 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:22.647817 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:22.963116 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:22.980014 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:23.147755 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:23.463329 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:23.479815 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:23.647377 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:23.962528 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:23.981757 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:24.147873 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:24.463742 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:24.479065 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:24.648127 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:24.963178 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:24.978350 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:25.148528 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:25.462757 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:25.479585 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:25.648158 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:25.963231 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:25.981623 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:26.148368 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:26.466221 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:26.481212 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:26.648225 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:26.962357 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:26.979438 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:27.149005 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:27.462325 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:27.478639 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:27.648220 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:27.962482 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:27.978546 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:28.148719 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:28.464009 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:28.479929 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:28.648046 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:28.963321 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:28.978523 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:29.148347 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:29.462898 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:29.480304 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:29.647757 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:29.963547 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:29.978969 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:30.147905 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:30.463502 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:30.479936 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:30.647965 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:30.963013 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:30.978116 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:31.147176 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:31.463576 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:31.479137 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:31.648043 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:31.963131 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:31.979071 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:32.147816 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:32.464083 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:32.478760 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:32.650823 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:32.963149 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:32.985880 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:33.147636 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:33.462820 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:33.478470 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:33.648550 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:33.963350 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:33.978781 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:34.148358 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:34.463204 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:34.478720 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:34.648522 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:34.963277 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:34.979651 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:35.147396 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:35.463292 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:35.478726 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:35.647372 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:35.963036 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:35.978557 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:36.148767 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:36.463438 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:36.479077 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:36.647689 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:36.963864 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:36.980093 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:37.147797 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:37.463522 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:37.482284 4002183 kapi.go:107] duration metric: took 57.51061781s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0115 14:03:37.648282 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:37.963271 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:38.148139 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:38.463388 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:38.648417 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:38.963148 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:39.147792 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:39.463103 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:39.648133 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:39.962894 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:40.147576 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:40.462680 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:40.647677 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:40.963082 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:41.147757 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:41.463063 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:41.647633 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:41.962681 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:42.148960 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:42.464085 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:42.647794 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:42.963208 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:43.148814 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:43.463642 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:43.648147 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:43.963152 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:44.149049 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:44.462599 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:44.653408 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:44.964747 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:45.148889 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:45.463452 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:45.647768 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:45.963749 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:46.148914 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:46.465176 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:46.647979 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:46.963441 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:47.149365 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:47.463342 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:47.648482 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:47.964336 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:48.149279 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:48.464016 4002183 kapi.go:107] duration metric: took 1m10.013808165s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0115 14:03:48.647832 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:49.148654 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:49.647487 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:50.147585 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:50.647569 4002183 kapi.go:107] duration metric: took 1m9.503716965s to wait for kubernetes.io/minikube-addons=gcp-auth ...
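	The interleaved "waiting for pod" lines above are produced by per-addon polls that re-check a label selector until its pods leave Pending. A minimal Go sketch of that loop shape (the selector string is taken from the log; the interval, timeout, and readiness check are illustrative, not minikube's actual values):
	
	    package main
	
	    import (
	        "errors"
	        "fmt"
	        "time"
	    )
	
	    // waitFor re-checks a condition on a fixed interval until it reports
	    // true or the timeout elapses -- the same shape as the loop that
	    // produced the "waiting for pod ... Pending" lines above.
	    func waitFor(label string, timeout, interval time.Duration, ready func() bool) error {
	        deadline := time.Now().Add(timeout)
	        for !ready() {
	            if time.Now().After(deadline) {
	                return errors.New("timed out waiting for " + label)
	            }
	            fmt.Printf("waiting for pod %q, current state: Pending\n", label)
	            time.Sleep(interval)
	        }
	        return nil
	    }
	
	    func main() {
	        start := time.Now()
	        _ = waitFor("app.kubernetes.io/name=ingress-nginx", 6*time.Minute, 500*time.Millisecond,
	            func() bool { return time.Since(start) > 2*time.Second }) // stand-in readiness check
	    }
	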
	I0115 14:03:50.649776 4002183 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-916083 cluster.
	I0115 14:03:50.651701 4002183 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0115 14:03:50.653504 4002183 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
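	The gcp-auth messages above describe an opt-out by label. A short Go sketch that renders a pod carrying that label, suitable for piping to `kubectl apply -f -` (the pod name and image are made up; only the `gcp-auth-skip-secret` key comes from the output):
	
	    package main
	
	    import (
	        "fmt"
	
	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "sigs.k8s.io/yaml"
	    )
	
	    func main() {
	        // Label the pod so the gcp-auth webhook skips mounting credentials.
	        pod := corev1.Pod{
	            TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
	            ObjectMeta: metav1.ObjectMeta{
	                Name:   "no-gcp-creds", // illustrative name
	                Labels: map[string]string{"gcp-auth-skip-secret": "true"},
	            },
	            Spec: corev1.PodSpec{
	                Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
	            },
	        }
	        out, _ := yaml.Marshal(pod)
	        fmt.Print(string(out))
	    }
	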
	I0115 14:03:50.655691 4002183 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, nvidia-device-plugin, cloud-spanner, default-storageclass, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0115 14:03:50.657725 4002183 addons.go:505] enable addons completed in 1m20.476337186s: enabled=[ingress-dns storage-provisioner nvidia-device-plugin cloud-spanner default-storageclass storage-provisioner-rancher inspektor-gadget metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0115 14:03:50.657771 4002183 start.go:233] waiting for cluster config update ...
	I0115 14:03:50.657791 4002183 start.go:242] writing updated cluster config ...
	I0115 14:03:50.658094 4002183 ssh_runner.go:195] Run: rm -f paused
	I0115 14:03:51.002489 4002183 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0115 14:03:51.004648 4002183 out.go:177] * Done! kubectl is now configured to use "addons-916083" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                       ATTEMPT             POD ID              POD
	e72a9f7597941       dd1b12fcb6097       8 seconds ago        Exited              hello-world-app            2                   47ff6f58ce48a       hello-world-app-5d77478584-wqrqg
	044423708daeb       74077e780ec71       33 seconds ago       Running             nginx                      0                   4f91ae5950411       nginx
	cd8ec42b386f2       2a5f29343eb03       About a minute ago   Running             gcp-auth                   0                   5054f5f2561d3       gcp-auth-d4c87556c-kr5wf
	23a7123973ef9       af594c6a879f2       About a minute ago   Exited              patch                      2                   b633437005e96       ingress-nginx-admission-patch-qs9bf
	581091146b79d       20e3f2db01e81       About a minute ago   Running             yakd                       0                   f21746e6bb044       yakd-dashboard-9947fc6bf-hqlhr
	163ffe68b1e25       af594c6a879f2       About a minute ago   Exited              create                     0                   cf50bb977cfc3       ingress-nginx-admission-create-m2dh4
	6bc60720746e0       a8df1f5260cb4       About a minute ago   Running             nvidia-device-plugin-ctr   0                   41758e10550d1       nvidia-device-plugin-daemonset-dj78p
	ffc6fd4d1596d       a89778274bf53       About a minute ago   Running             cloud-spanner-emulator     0                   7659c9ad58d08       cloud-spanner-emulator-64c8c85f65-c8qfb
	0509b1d6488f6       97e04611ad434       About a minute ago   Running             coredns                    0                   0d2dd25a562ef       coredns-5dd5756b68-nbgjt
	8ca38fbc3ac83       7ce2150c8929b       About a minute ago   Running             local-path-provisioner     0                   2fc2f53014086       local-path-provisioner-78b46b4d5c-ddfls
	8702bfee57bb9       ba04bb24b9575       2 minutes ago        Running             storage-provisioner        0                   50ab4c2d4d0de       storage-provisioner
	631d60ffe727c       3ca3ca488cf13       2 minutes ago        Running             kube-proxy                 0                   72a1079e82ac7       kube-proxy-fs7hg
	cc48c18dbdb99       04b4eaa3d3db8       2 minutes ago        Running             kindnet-cni                0                   e334ffd6cac06       kindnet-6r7md
	ab42205eeee75       05c284c929889       2 minutes ago        Running             kube-scheduler             0                   0ca40721db9e6       kube-scheduler-addons-916083
	03bb27f5cf55e       9961cbceaf234       2 minutes ago        Running             kube-controller-manager    0                   aa123d8449420       kube-controller-manager-addons-916083
	4b4fb5cb9a74f       04b4c447bb9d4       2 minutes ago        Running             kube-apiserver             0                   33b8781915870       kube-apiserver-addons-916083
	1cc454d90602a       9cdd6470f48c8       2 minutes ago        Running             etcd                       0                   29d33a5b6d8da       etcd-addons-916083
	
	
	==> containerd <==
	Jan 15 14:04:53 addons-916083 containerd[741]: time="2024-01-15T14:04:53.720542729Z" level=info msg="Stop container \"8f4ea6e34f87c88ca96ff3bc7c9e1e43c567d679375f36665147d62ff9da9f52\" with signal terminated"
	Jan 15 14:04:53 addons-916083 containerd[741]: time="2024-01-15T14:04:53.844399535Z" level=info msg="RemoveContainer for \"e41990ab96256b0a14990b1c9c2b11dd0cb1b0253119923ac5b3b52d4c9cb0a0\""
	Jan 15 14:04:53 addons-916083 containerd[741]: time="2024-01-15T14:04:53.863130957Z" level=info msg="RemoveContainer for \"e41990ab96256b0a14990b1c9c2b11dd0cb1b0253119923ac5b3b52d4c9cb0a0\" returns successfully"
	Jan 15 14:04:53 addons-916083 containerd[741]: time="2024-01-15T14:04:53.874027805Z" level=info msg="RemoveContainer for \"4985a31764a2086f5a1d5d423a3e6438382458fa18bc41cf4554638ccee3e70f\""
	Jan 15 14:04:53 addons-916083 containerd[741]: time="2024-01-15T14:04:53.880768935Z" level=info msg="RemoveContainer for \"4985a31764a2086f5a1d5d423a3e6438382458fa18bc41cf4554638ccee3e70f\" returns successfully"
	Jan 15 14:04:53 addons-916083 containerd[741]: time="2024-01-15T14:04:53.887897477Z" level=error msg="ContainerStatus for \"4985a31764a2086f5a1d5d423a3e6438382458fa18bc41cf4554638ccee3e70f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4985a31764a2086f5a1d5d423a3e6438382458fa18bc41cf4554638ccee3e70f\": not found"
	Jan 15 14:04:53 addons-916083 containerd[741]: time="2024-01-15T14:04:53.891867123Z" level=info msg="RemoveContainer for \"b635d22dbe0a2559fc3107d271d32a176c604d13a0ab9041fd13c321133fbfc5\""
	Jan 15 14:04:53 addons-916083 containerd[741]: time="2024-01-15T14:04:53.912654447Z" level=info msg="RemoveContainer for \"b635d22dbe0a2559fc3107d271d32a176c604d13a0ab9041fd13c321133fbfc5\" returns successfully"
	Jan 15 14:04:53 addons-916083 containerd[741]: time="2024-01-15T14:04:53.913484340Z" level=error msg="ContainerStatus for \"b635d22dbe0a2559fc3107d271d32a176c604d13a0ab9041fd13c321133fbfc5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b635d22dbe0a2559fc3107d271d32a176c604d13a0ab9041fd13c321133fbfc5\": not found"
	Jan 15 14:04:55 addons-916083 containerd[741]: time="2024-01-15T14:04:55.727451759Z" level=info msg="Kill container \"8f4ea6e34f87c88ca96ff3bc7c9e1e43c567d679375f36665147d62ff9da9f52\""
	Jan 15 14:04:55 addons-916083 containerd[741]: time="2024-01-15T14:04:55.807465402Z" level=info msg="shim disconnected" id=8f4ea6e34f87c88ca96ff3bc7c9e1e43c567d679375f36665147d62ff9da9f52
	Jan 15 14:04:55 addons-916083 containerd[741]: time="2024-01-15T14:04:55.807730141Z" level=warning msg="cleaning up after shim disconnected" id=8f4ea6e34f87c88ca96ff3bc7c9e1e43c567d679375f36665147d62ff9da9f52 namespace=k8s.io
	Jan 15 14:04:55 addons-916083 containerd[741]: time="2024-01-15T14:04:55.807822020Z" level=info msg="cleaning up dead shim"
	Jan 15 14:04:55 addons-916083 containerd[741]: time="2024-01-15T14:04:55.818458895Z" level=warning msg="cleanup warnings time=\"2024-01-15T14:04:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10140 runtime=io.containerd.runc.v2\n"
	Jan 15 14:04:55 addons-916083 containerd[741]: time="2024-01-15T14:04:55.821419109Z" level=info msg="StopContainer for \"8f4ea6e34f87c88ca96ff3bc7c9e1e43c567d679375f36665147d62ff9da9f52\" returns successfully"
	Jan 15 14:04:55 addons-916083 containerd[741]: time="2024-01-15T14:04:55.822163187Z" level=info msg="StopPodSandbox for \"efb95591000dd5dfb0b07a06b1a50556cc2688a9fc65ffc8e0f14e7d30f80b8a\""
	Jan 15 14:04:55 addons-916083 containerd[741]: time="2024-01-15T14:04:55.822241297Z" level=info msg="Container to stop \"8f4ea6e34f87c88ca96ff3bc7c9e1e43c567d679375f36665147d62ff9da9f52\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jan 15 14:04:55 addons-916083 containerd[741]: time="2024-01-15T14:04:55.860421659Z" level=info msg="shim disconnected" id=efb95591000dd5dfb0b07a06b1a50556cc2688a9fc65ffc8e0f14e7d30f80b8a
	Jan 15 14:04:55 addons-916083 containerd[741]: time="2024-01-15T14:04:55.860718241Z" level=warning msg="cleaning up after shim disconnected" id=efb95591000dd5dfb0b07a06b1a50556cc2688a9fc65ffc8e0f14e7d30f80b8a namespace=k8s.io
	Jan 15 14:04:55 addons-916083 containerd[741]: time="2024-01-15T14:04:55.860753358Z" level=info msg="cleaning up dead shim"
	Jan 15 14:04:55 addons-916083 containerd[741]: time="2024-01-15T14:04:55.872160805Z" level=warning msg="cleanup warnings time=\"2024-01-15T14:04:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10174 runtime=io.containerd.runc.v2\n"
	Jan 15 14:04:55 addons-916083 containerd[741]: time="2024-01-15T14:04:55.930447409Z" level=info msg="TearDown network for sandbox \"efb95591000dd5dfb0b07a06b1a50556cc2688a9fc65ffc8e0f14e7d30f80b8a\" successfully"
	Jan 15 14:04:55 addons-916083 containerd[741]: time="2024-01-15T14:04:55.930510341Z" level=info msg="StopPodSandbox for \"efb95591000dd5dfb0b07a06b1a50556cc2688a9fc65ffc8e0f14e7d30f80b8a\" returns successfully"
	Jan 15 14:04:56 addons-916083 containerd[741]: time="2024-01-15T14:04:56.871354475Z" level=info msg="RemoveContainer for \"8f4ea6e34f87c88ca96ff3bc7c9e1e43c567d679375f36665147d62ff9da9f52\""
	Jan 15 14:04:56 addons-916083 containerd[741]: time="2024-01-15T14:04:56.876882215Z" level=info msg="RemoveContainer for \"8f4ea6e34f87c88ca96ff3bc7c9e1e43c567d679375f36665147d62ff9da9f52\" returns successfully"
	
	
	==> coredns [0509b1d6488f6866ef9630531c42875aed9eae5871a443cca13f897c6ca3cc30] <==
	[INFO] 10.244.0.19:53845 - 8820 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000055251s
	[INFO] 10.244.0.19:53845 - 14787 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000055095s
	[INFO] 10.244.0.19:53845 - 59218 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000053652s
	[INFO] 10.244.0.19:53845 - 59513 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00005549s
	[INFO] 10.244.0.19:53845 - 8722 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001013304s
	[INFO] 10.244.0.19:53845 - 1504 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00095057s
	[INFO] 10.244.0.19:53845 - 21962 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000083403s
	[INFO] 10.244.0.19:42424 - 7502 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000105654s
	[INFO] 10.244.0.19:57422 - 56180 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000232297s
	[INFO] 10.244.0.19:42424 - 26269 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000074386s
	[INFO] 10.244.0.19:57422 - 2537 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00007825s
	[INFO] 10.244.0.19:42424 - 14864 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000083575s
	[INFO] 10.244.0.19:57422 - 4580 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000097286s
	[INFO] 10.244.0.19:57422 - 6859 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000060872s
	[INFO] 10.244.0.19:42424 - 9857 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000089294s
	[INFO] 10.244.0.19:57422 - 33462 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000046456s
	[INFO] 10.244.0.19:42424 - 59963 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000069898s
	[INFO] 10.244.0.19:57422 - 27444 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000097179s
	[INFO] 10.244.0.19:42424 - 51408 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000066697s
	[INFO] 10.244.0.19:57422 - 42659 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.0012632s
	[INFO] 10.244.0.19:42424 - 54964 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001593505s
	[INFO] 10.244.0.19:57422 - 62979 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000877743s
	[INFO] 10.244.0.19:42424 - 31305 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001187642s
	[INFO] 10.244.0.19:57422 - 44971 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00006696s
	[INFO] 10.244.0.19:42424 - 18999 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000034764s
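	The NXDOMAIN/NOERROR pairs above show the pod resolver walking its resolv.conf search list (`svc.cluster.local`, `cluster.local`, then the EC2-internal domain) before the bare service name finally resolves. Querying the fully qualified name with a trailing dot skips that expansion; a sketch meant to run inside a cluster pod, using the service name from the log:
	
	    package main
	
	    import (
	        "context"
	        "fmt"
	        "net"
	    )
	
	    func main() {
	        var r net.Resolver
	        // The trailing dot marks the name as absolute, so the resolver
	        // does not append any search-list suffixes before querying.
	        addrs, err := r.LookupHost(context.Background(),
	            "hello-world-app.default.svc.cluster.local.")
	        fmt.Println(addrs, err)
	    }
	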
	
	
	==> describe nodes <==
	Name:               addons-916083
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-916083
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=71cf7d00913f789829bf5813c1d11b9a83eda53e
	                    minikube.k8s.io/name=addons-916083
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_15T14_02_17_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-916083
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 14:02:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-916083
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 14:05:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 14:04:49 +0000   Mon, 15 Jan 2024 14:02:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 14:04:49 +0000   Mon, 15 Jan 2024 14:02:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 14:04:49 +0000   Mon, 15 Jan 2024 14:02:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 14:04:49 +0000   Mon, 15 Jan 2024 14:02:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-916083
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 004d54a504a441c4bf6d99550b0c9799
	  System UUID:                eb64688b-abbc-4ff4-af75-2a89f845e9c7
	  Boot ID:                    489f1f75-cead-4e0d-97ee-b5bdbf9f668e
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.26
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-64c8c85f65-c8qfb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  default                     hello-world-app-5d77478584-wqrqg           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  gcp-auth                    gcp-auth-d4c87556c-kr5wf                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 coredns-5dd5756b68-nbgjt                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m32s
	  kube-system                 etcd-addons-916083                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m45s
	  kube-system                 kindnet-6r7md                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m32s
	  kube-system                 kube-apiserver-addons-916083               250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 kube-controller-manager-addons-916083      200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 kube-proxy-fs7hg                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-scheduler-addons-916083               100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 nvidia-device-plugin-daemonset-dj78p       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  local-path-storage          local-path-provisioner-78b46b4d5c-ddfls    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-hqlhr             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m29s                  kube-proxy       
	  Normal  Starting                 2m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m52s (x8 over 2m52s)  kubelet          Node addons-916083 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m52s (x8 over 2m52s)  kubelet          Node addons-916083 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m52s (x7 over 2m52s)  kubelet          Node addons-916083 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m45s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m45s                  kubelet          Node addons-916083 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m45s                  kubelet          Node addons-916083 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m45s                  kubelet          Node addons-916083 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m45s                  kubelet          Node addons-916083 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m35s                  kubelet          Node addons-916083 status is now: NodeReady
	  Normal  RegisteredNode           2m32s                  node-controller  Node addons-916083 event: Registered Node addons-916083 in Controller
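	
	The percentages in the Allocated resources table above are requests (or limits) divided by the node's allocatable capacity (2 CPUs = 2000m, 8022496Ki memory). A quick arithmetic check of the cpu and memory rows, truncated the way kubectl prints them:
	
	    package main
	
	    import "fmt"
	
	    func main() {
	        fmt.Printf("cpu: %d%%\n", 850*100/2000)            // 42
	        fmt.Printf("memory: %d%%\n", 348*1024*100/8022496) // 4 (348Mi in Ki)
	    }
	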
	
	
	==> dmesg <==
	[  +0.000805] FS-Cache: N-cookie c=000000c0 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.000939] FS-Cache: N-cookie d=000000006e17dfe5{9p.inode} n=000000006c2f7aa3
	[  +0.001155] FS-Cache: N-key=[8] '51e2c90000000000'
	[  +0.002800] FS-Cache: Duplicate cookie detected
	[  +0.000758] FS-Cache: O-cookie c=000000ba [p=000000b7 fl=226 nc=0 na=1]
	[  +0.001068] FS-Cache: O-cookie d=000000006e17dfe5{9p.inode} n=00000000acbde6cc
	[  +0.001215] FS-Cache: O-key=[8] '51e2c90000000000'
	[  +0.000760] FS-Cache: N-cookie c=000000c1 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.001003] FS-Cache: N-cookie d=000000006e17dfe5{9p.inode} n=00000000f4035d4d
	[  +0.001087] FS-Cache: N-key=[8] '51e2c90000000000'
	[  +2.762848] FS-Cache: Duplicate cookie detected
	[  +0.000831] FS-Cache: O-cookie c=000000b8 [p=000000b7 fl=226 nc=0 na=1]
	[  +0.001117] FS-Cache: O-cookie d=000000006e17dfe5{9p.inode} n=000000002f94bec5
	[  +0.001162] FS-Cache: O-key=[8] '50e2c90000000000'
	[  +0.000725] FS-Cache: N-cookie c=000000c3 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.000948] FS-Cache: N-cookie d=000000006e17dfe5{9p.inode} n=0000000075bebf78
	[  +0.001135] FS-Cache: N-key=[8] '50e2c90000000000'
	[  +0.389294] FS-Cache: Duplicate cookie detected
	[  +0.000778] FS-Cache: O-cookie c=000000bd [p=000000b7 fl=226 nc=0 na=1]
	[  +0.000969] FS-Cache: O-cookie d=000000006e17dfe5{9p.inode} n=00000000a105c0ad
	[  +0.001207] FS-Cache: O-key=[8] '56e2c90000000000'
	[  +0.000807] FS-Cache: N-cookie c=000000c4 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.000991] FS-Cache: N-cookie d=000000006e17dfe5{9p.inode} n=000000006c2f7aa3
	[  +0.001031] FS-Cache: N-key=[8] '56e2c90000000000'
	[Jan15 13:23] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	
	
	==> etcd [1cc454d90602ac16e878f37a2d7ebae4f134e48bae58cae67ef416f119481c87] <==
	{"level":"info","ts":"2024-01-15T14:02:09.835495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-01-15T14:02:09.835576Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-01-15T14:02:09.8371Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-15T14:02:09.837268Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-01-15T14:02:09.837288Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-01-15T14:02:09.837826Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-15T14:02:09.837852Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-15T14:02:10.823293Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-15T14:02:10.823508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-15T14:02:10.823619Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-01-15T14:02:10.823735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-01-15T14:02:10.823817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-01-15T14:02:10.823893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-01-15T14:02:10.82397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-01-15T14:02:10.82743Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-916083 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-15T14:02:10.827615Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-15T14:02:10.8288Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-15T14:02:10.829087Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-15T14:02:10.82944Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-15T14:02:10.863327Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-15T14:02:10.86354Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-15T14:02:10.864822Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-01-15T14:02:10.878266Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-15T14:02:10.883305Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-15T14:02:10.883511Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> gcp-auth [cd8ec42b386f28ca1f8c83eafc5bb75daf9c684154eb941f88b132116d20e226] <==
	2024/01/15 14:03:49 GCP Auth Webhook started!
	2024/01/15 14:04:02 Ready to marshal response ...
	2024/01/15 14:04:02 Ready to write response ...
	2024/01/15 14:04:12 Ready to marshal response ...
	2024/01/15 14:04:12 Ready to write response ...
	2024/01/15 14:04:25 Ready to marshal response ...
	2024/01/15 14:04:25 Ready to write response ...
	2024/01/15 14:04:34 Ready to marshal response ...
	2024/01/15 14:04:34 Ready to write response ...
	2024/01/15 14:04:36 Ready to marshal response ...
	2024/01/15 14:04:36 Ready to write response ...
	
	
	==> kernel <==
	 14:05:01 up 18:47,  0 users,  load average: 2.61, 2.20, 2.57
	Linux addons-916083 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [cc48c18dbdb992baa03f2db0baf011ff6f64981f40d1f120d8d450e3513ae2d5] <==
	I0115 14:03:01.450607       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0115 14:03:01.467793       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:03:01.467831       1 main.go:227] handling current node
	I0115 14:03:11.479886       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:03:11.479914       1 main.go:227] handling current node
	I0115 14:03:21.491145       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:03:21.491171       1 main.go:227] handling current node
	I0115 14:03:31.503599       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:03:31.503628       1 main.go:227] handling current node
	I0115 14:03:41.507886       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:03:41.507914       1 main.go:227] handling current node
	I0115 14:03:51.519401       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:03:51.519430       1 main.go:227] handling current node
	I0115 14:04:01.530196       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:04:01.530226       1 main.go:227] handling current node
	I0115 14:04:11.539468       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:04:11.539524       1 main.go:227] handling current node
	I0115 14:04:21.544000       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:04:21.544033       1 main.go:227] handling current node
	I0115 14:04:31.555402       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:04:31.555610       1 main.go:227] handling current node
	I0115 14:04:41.560425       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:04:41.560453       1 main.go:227] handling current node
	I0115 14:04:51.566620       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:04:51.566648       1 main.go:227] handling current node
	
	
	==> kube-apiserver [4b4fb5cb9a74fdfee7be722e3f253218e863ea1d5dc44b9177095caed4a158e2] <==
	I0115 14:04:22.708872       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0115 14:04:23.632114       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","catch-all","exempt","system","node-high","leader-election","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	I0115 14:04:24.807592       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0115 14:04:25.327164       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.200.115"}
	E0115 14:04:33.632597       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","workload-high","workload-low","global-default","catch-all","exempt","system","node-high"] items=[{},{},{},{},{},{},{},{}]
	I0115 14:04:35.096410       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.50.60"}
	E0115 14:04:43.633049       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt","system"] items=[{},{},{},{},{},{},{},{}]
	I0115 14:04:52.779983       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 14:04:52.780030       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 14:04:52.803381       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 14:04:52.803428       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 14:04:52.816094       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 14:04:52.816145       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 14:04:52.861543       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 14:04:52.861586       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 14:04:52.867731       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 14:04:52.867783       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 14:04:52.884977       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 14:04:52.885036       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 14:04:52.906347       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 14:04:52.907455       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0115 14:04:53.633613       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","catch-all","exempt","system","node-high","leader-election","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	W0115 14:04:53.862117       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0115 14:04:53.908583       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0115 14:04:53.914185       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [03bb27f5cf55e47253b31ebd97175a5e43bb2f02bef5293212d5db8853b9a511] <==
	I0115 14:04:46.013077       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-resizer"
	I0115 14:04:52.662321       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0115 14:04:52.668720       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="9.591µs"
	I0115 14:04:52.683808       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0115 14:04:52.968888       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="6.457µs"
	I0115 14:04:53.848373       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="50.952µs"
	E0115 14:04:53.864409       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 14:04:53.910944       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 14:04:53.916650       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0115 14:04:54.820692       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 14:04:54.820724       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0115 14:04:55.293384       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 14:04:55.293419       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0115 14:04:55.459602       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 14:04:55.459637       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0115 14:04:56.673195       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 14:04:56.673226       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0115 14:04:57.244904       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 14:04:57.244938       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0115 14:04:57.713800       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 14:04:57.713832       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0115 14:04:59.312089       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0115 14:04:59.312133       1 shared_informer.go:318] Caches are synced for resource quota
	I0115 14:04:59.660651       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0115 14:04:59.660694       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-proxy [631d60ffe727cb63aaea4212d8ec338271a8310d314d3a5e7c2720cb7a1c338f] <==
	I0115 14:02:31.586464       1 server_others.go:69] "Using iptables proxy"
	I0115 14:02:31.603709       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0115 14:02:31.686387       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0115 14:02:31.688604       1 server_others.go:152] "Using iptables Proxier"
	I0115 14:02:31.688645       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0115 14:02:31.688654       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0115 14:02:31.688713       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0115 14:02:31.688963       1 server.go:846] "Version info" version="v1.28.4"
	I0115 14:02:31.688980       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0115 14:02:31.689977       1 config.go:188] "Starting service config controller"
	I0115 14:02:31.699582       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0115 14:02:31.690700       1 config.go:97] "Starting endpoint slice config controller"
	I0115 14:02:31.691309       1 config.go:315] "Starting node config controller"
	I0115 14:02:31.700969       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0115 14:02:31.702224       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0115 14:02:31.702246       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0115 14:02:31.702262       1 shared_informer.go:318] Caches are synced for service config
	I0115 14:02:31.802186       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [ab42205eeee750d8578778a065fec6c53560a1398c6b6ae117de30bae5ea2d90] <==
	W0115 14:02:13.970152       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0115 14:02:13.970169       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0115 14:02:13.970236       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0115 14:02:13.970250       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0115 14:02:13.970330       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0115 14:02:13.970355       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0115 14:02:13.970397       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0115 14:02:13.970413       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0115 14:02:13.970463       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0115 14:02:13.970478       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0115 14:02:13.970529       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0115 14:02:13.970543       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0115 14:02:13.970594       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0115 14:02:13.970613       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0115 14:02:13.970666       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0115 14:02:13.970681       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0115 14:02:13.970716       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0115 14:02:13.970760       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0115 14:02:13.970810       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0115 14:02:13.970838       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0115 14:02:13.970978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0115 14:02:13.971003       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0115 14:02:13.971034       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0115 14:02:13.971050       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0115 14:02:15.058493       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 15 14:04:53 addons-916083 kubelet[1339]: I0115 14:04:53.486308    1339 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7ggdv\" (UniqueName: \"kubernetes.io/projected/c542b9d8-bd4a-48a2-8471-8e1b6a2b2cf8-kube-api-access-7ggdv\") on node \"addons-916083\" DevicePath \"\""
	Jan 15 14:04:53 addons-916083 kubelet[1339]: I0115 14:04:53.486355    1339 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-j4j62\" (UniqueName: \"kubernetes.io/projected/dd38a8e6-1095-44b6-a257-7322dd8369e7-kube-api-access-j4j62\") on node \"addons-916083\" DevicePath \"\""
	Jan 15 14:04:53 addons-916083 kubelet[1339]: I0115 14:04:53.833939    1339 scope.go:117] "RemoveContainer" containerID="e41990ab96256b0a14990b1c9c2b11dd0cb1b0253119923ac5b3b52d4c9cb0a0"
	Jan 15 14:04:53 addons-916083 kubelet[1339]: I0115 14:04:53.834266    1339 scope.go:117] "RemoveContainer" containerID="e72a9f7597941d993bcb596123b76544bd3d702c34e0d0851166141425a07530"
	Jan 15 14:04:53 addons-916083 kubelet[1339]: E0115 14:04:53.834530    1339 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-wqrqg_default(01a98167-d436-4fe1-9867-19fc59760f99)\"" pod="default/hello-world-app-5d77478584-wqrqg" podUID="01a98167-d436-4fe1-9867-19fc59760f99"
	Jan 15 14:04:53 addons-916083 kubelet[1339]: I0115 14:04:53.865333    1339 scope.go:117] "RemoveContainer" containerID="4985a31764a2086f5a1d5d423a3e6438382458fa18bc41cf4554638ccee3e70f"
	Jan 15 14:04:53 addons-916083 kubelet[1339]: I0115 14:04:53.887584    1339 scope.go:117] "RemoveContainer" containerID="4985a31764a2086f5a1d5d423a3e6438382458fa18bc41cf4554638ccee3e70f"
	Jan 15 14:04:53 addons-916083 kubelet[1339]: E0115 14:04:53.888158    1339 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4985a31764a2086f5a1d5d423a3e6438382458fa18bc41cf4554638ccee3e70f\": not found" containerID="4985a31764a2086f5a1d5d423a3e6438382458fa18bc41cf4554638ccee3e70f"
	Jan 15 14:04:53 addons-916083 kubelet[1339]: I0115 14:04:53.888218    1339 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4985a31764a2086f5a1d5d423a3e6438382458fa18bc41cf4554638ccee3e70f"} err="failed to get container status \"4985a31764a2086f5a1d5d423a3e6438382458fa18bc41cf4554638ccee3e70f\": rpc error: code = NotFound desc = an error occurred when try to find container \"4985a31764a2086f5a1d5d423a3e6438382458fa18bc41cf4554638ccee3e70f\": not found"
	Jan 15 14:04:53 addons-916083 kubelet[1339]: I0115 14:04:53.888232    1339 scope.go:117] "RemoveContainer" containerID="b635d22dbe0a2559fc3107d271d32a176c604d13a0ab9041fd13c321133fbfc5"
	Jan 15 14:04:53 addons-916083 kubelet[1339]: I0115 14:04:53.913007    1339 scope.go:117] "RemoveContainer" containerID="b635d22dbe0a2559fc3107d271d32a176c604d13a0ab9041fd13c321133fbfc5"
	Jan 15 14:04:53 addons-916083 kubelet[1339]: E0115 14:04:53.913802    1339 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b635d22dbe0a2559fc3107d271d32a176c604d13a0ab9041fd13c321133fbfc5\": not found" containerID="b635d22dbe0a2559fc3107d271d32a176c604d13a0ab9041fd13c321133fbfc5"
	Jan 15 14:04:53 addons-916083 kubelet[1339]: I0115 14:04:53.913851    1339 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b635d22dbe0a2559fc3107d271d32a176c604d13a0ab9041fd13c321133fbfc5"} err="failed to get container status \"b635d22dbe0a2559fc3107d271d32a176c604d13a0ab9041fd13c321133fbfc5\": rpc error: code = NotFound desc = an error occurred when try to find container \"b635d22dbe0a2559fc3107d271d32a176c604d13a0ab9041fd13c321133fbfc5\": not found"
	Jan 15 14:04:54 addons-916083 kubelet[1339]: I0115 14:04:54.514298    1339 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c542b9d8-bd4a-48a2-8471-8e1b6a2b2cf8" path="/var/lib/kubelet/pods/c542b9d8-bd4a-48a2-8471-8e1b6a2b2cf8/volumes"
	Jan 15 14:04:54 addons-916083 kubelet[1339]: I0115 14:04:54.515295    1339 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c7e289dd-e01b-4589-85ba-3fbcbdea985a" path="/var/lib/kubelet/pods/c7e289dd-e01b-4589-85ba-3fbcbdea985a/volumes"
	Jan 15 14:04:54 addons-916083 kubelet[1339]: I0115 14:04:54.515727    1339 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d571e8ef-cf52-4f14-b47a-bcabe545549a" path="/var/lib/kubelet/pods/d571e8ef-cf52-4f14-b47a-bcabe545549a/volumes"
	Jan 15 14:04:54 addons-916083 kubelet[1339]: I0115 14:04:54.516096    1339 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="dd38a8e6-1095-44b6-a257-7322dd8369e7" path="/var/lib/kubelet/pods/dd38a8e6-1095-44b6-a257-7322dd8369e7/volumes"
	Jan 15 14:04:56 addons-916083 kubelet[1339]: I0115 14:04:56.100048    1339 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxccn\" (UniqueName: \"kubernetes.io/projected/b39b2e80-4dcd-401c-8522-c031de889453-kube-api-access-qxccn\") pod \"b39b2e80-4dcd-401c-8522-c031de889453\" (UID: \"b39b2e80-4dcd-401c-8522-c031de889453\") "
	Jan 15 14:04:56 addons-916083 kubelet[1339]: I0115 14:04:56.100121    1339 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b39b2e80-4dcd-401c-8522-c031de889453-webhook-cert\") pod \"b39b2e80-4dcd-401c-8522-c031de889453\" (UID: \"b39b2e80-4dcd-401c-8522-c031de889453\") "
	Jan 15 14:04:56 addons-916083 kubelet[1339]: I0115 14:04:56.103311    1339 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b39b2e80-4dcd-401c-8522-c031de889453-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "b39b2e80-4dcd-401c-8522-c031de889453" (UID: "b39b2e80-4dcd-401c-8522-c031de889453"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 15 14:04:56 addons-916083 kubelet[1339]: I0115 14:04:56.106041    1339 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b39b2e80-4dcd-401c-8522-c031de889453-kube-api-access-qxccn" (OuterVolumeSpecName: "kube-api-access-qxccn") pod "b39b2e80-4dcd-401c-8522-c031de889453" (UID: "b39b2e80-4dcd-401c-8522-c031de889453"). InnerVolumeSpecName "kube-api-access-qxccn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 15 14:04:56 addons-916083 kubelet[1339]: I0115 14:04:56.200557    1339 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qxccn\" (UniqueName: \"kubernetes.io/projected/b39b2e80-4dcd-401c-8522-c031de889453-kube-api-access-qxccn\") on node \"addons-916083\" DevicePath \"\""
	Jan 15 14:04:56 addons-916083 kubelet[1339]: I0115 14:04:56.200596    1339 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b39b2e80-4dcd-401c-8522-c031de889453-webhook-cert\") on node \"addons-916083\" DevicePath \"\""
	Jan 15 14:04:56 addons-916083 kubelet[1339]: I0115 14:04:56.514267    1339 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b39b2e80-4dcd-401c-8522-c031de889453" path="/var/lib/kubelet/pods/b39b2e80-4dcd-401c-8522-c031de889453/volumes"
	Jan 15 14:04:56 addons-916083 kubelet[1339]: I0115 14:04:56.869389    1339 scope.go:117] "RemoveContainer" containerID="8f4ea6e34f87c88ca96ff3bc7c9e1e43c567d679375f36665147d62ff9da9f52"
	
	
	==> storage-provisioner [8702bfee57bb9e8e04569ec57888559b3ea0d29b0a2af00f5b96c1b8921d474a] <==
	I0115 14:02:35.760716       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0115 14:02:35.850992       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0115 14:02:35.851072       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0115 14:02:35.886709       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0115 14:02:35.886891       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-916083_afbf9aca-6a2f-4286-b109-b9e57a45b1e6!
	I0115 14:02:35.897886       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6fd94479-8d93-428e-bb75-f1c93fc214d4", APIVersion:"v1", ResourceVersion:"566", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-916083_afbf9aca-6a2f-4286-b109-b9e57a45b1e6 became leader
	I0115 14:02:35.987030       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-916083_afbf9aca-6a2f-4286-b109-b9e57a45b1e6!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-916083 -n addons-916083
helpers_test.go:261: (dbg) Run:  kubectl --context addons-916083 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (37.93s)
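The decisive step in the Ingress failure above is the nslookup timeout: the query to the ingress-dns server on node IP 192.168.49.2 never got an answer, so the failure is connectivity to the DNS endpoint rather than a missing record. A minimal Go sketch of the same probe with explicit timeouts (a debugging aid, not part of the test suite; the node IP and hostname come from the log above, and treating 192.168.49.2:53 as the DNS endpoint is an assumption):

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Resolve via the minikube node directly (assumed endpoint
		// 192.168.49.2:53), bypassing /etc/resolv.conf, so a timeout
		// cleanly means "server unreachable", not "name not found".
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, network, "192.168.49.2:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "hello-john.test")
		if err != nil {
			fmt.Println("lookup failed:", err) // the run above timed out on this path
			return
		}
		fmt.Println("resolved:", addrs)
	}

A timeout from this probe, matching the ";; connection timed out" output above, points at reachability of port 53 on the node rather than at the ingress-dns records themselves.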

                                                
                                    
TestAddons/parallel/CloudSpanner (9.86s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-c8qfb" [c04eb780-ec07-47e2-81b0-727148f7a7ee] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003421984s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-916083
addons_test.go:860: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable cloud-spanner -p addons-916083: exit status 11 (615.350066ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-01-15T14:05:11Z" level=error msg="stat /run/containerd/runc/k8s.io/e3cf96895c850cc6ff6d09442fed7d9b673026852ee91bbb927c05ffd666d5ba: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:861: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 addons disable cloud-spanner -p addons-916083" : exit status 11
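Note that exit status 11 here is not a cloud-spanner problem as such: the disable path first checks for paused containers by shelling out to runc ("check paused: list paused"), and the stat error shows runc tripping over a container state directory that vanished mid-listing, most likely because other addons were being torn down in parallel at 14:05 (see the Audit table below). A sketch of that check with a short retry around the transient "no such file or directory" case (the retry policy is illustrative, not minikube's actual logic; it assumes runc is on PATH inside the node, e.g. when run via minikube ssh):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// listRuncContainers runs the same command the error above quotes.
	// A container deleted between runc's directory listing and its stat
	// of an entry makes the command exit non-zero once; retrying briefly
	// rides out the teardown race.
	func listRuncContainers(ctx context.Context) (string, error) {
		var lastErr error
		for attempt := 0; attempt < 3; attempt++ {
			out, err := exec.CommandContext(ctx, "sudo", "runc",
				"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").CombinedOutput()
			if err == nil {
				return string(out), nil
			}
			lastErr = err
			if !strings.Contains(string(out), "no such file or directory") {
				break // a real failure, not the teardown race
			}
			time.Sleep(500 * time.Millisecond)
		}
		return "", fmt.Errorf("runc list: %w", lastErr)
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		out, err := listRuncContainers(ctx)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(out)
	}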
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/CloudSpanner]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-916083
helpers_test.go:235: (dbg) docker inspect addons-916083:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "74cf3f25b39a7dc5b0512eab07912ff953e0b1906ea86ac8914f7dea7302503f",
	        "Created": "2024-01-15T14:01:51.89005836Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 4002632,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-15T14:01:52.227859779Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20e2d9b56eb2e595fd2b9c5719a0e58f3d7f8c692190d8fde2558cb6a9714f01",
	        "ResolvConfPath": "/var/lib/docker/containers/74cf3f25b39a7dc5b0512eab07912ff953e0b1906ea86ac8914f7dea7302503f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/74cf3f25b39a7dc5b0512eab07912ff953e0b1906ea86ac8914f7dea7302503f/hostname",
	        "HostsPath": "/var/lib/docker/containers/74cf3f25b39a7dc5b0512eab07912ff953e0b1906ea86ac8914f7dea7302503f/hosts",
	        "LogPath": "/var/lib/docker/containers/74cf3f25b39a7dc5b0512eab07912ff953e0b1906ea86ac8914f7dea7302503f/74cf3f25b39a7dc5b0512eab07912ff953e0b1906ea86ac8914f7dea7302503f-json.log",
	        "Name": "/addons-916083",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-916083:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-916083",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cf9d963cc8d65c88e7f016d0d91e93db2454a4a480880e388b87046f7a5fabdd-init/diff:/var/lib/docker/overlay2/37735672df261a15b7a2ba1989e6f3a0906a58ecd248d26a2bc61e23d88a15c5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf9d963cc8d65c88e7f016d0d91e93db2454a4a480880e388b87046f7a5fabdd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf9d963cc8d65c88e7f016d0d91e93db2454a4a480880e388b87046f7a5fabdd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf9d963cc8d65c88e7f016d0d91e93db2454a4a480880e388b87046f7a5fabdd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-916083",
	                "Source": "/var/lib/docker/volumes/addons-916083/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-916083",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-916083",
	                "name.minikube.sigs.k8s.io": "addons-916083",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0b386c2dad80c227c1a8f98d67fc82d80a4b8b592f3166fa0a1f0e4072d0c5a6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36439"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36438"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36435"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36437"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36436"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0b386c2dad80",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-916083": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "74cf3f25b39a",
	                        "addons-916083"
	                    ],
	                    "NetworkID": "df7f910ab822e8bb791b6bacf9aafc3fb36a7a28df4815084863cbae77a7a61b",
	                    "EndpointID": "1bcc56304f7d3df50b2a337c191736660012f7230dca2d9051d1af9ab4ac67e6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-916083 -n addons-916083
helpers_test.go:244: <<< TestAddons/parallel/CloudSpanner FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CloudSpanner]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-916083 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-916083 logs -n 25: (2.099597045s)
helpers_test.go:252: TestAddons/parallel/CloudSpanner logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-168263   | jenkins | v1.32.0 | 15 Jan 24 14:01 UTC |                     |
	|         | -p download-only-168263              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2    |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 15 Jan 24 14:01 UTC | 15 Jan 24 14:01 UTC |
	| delete  | -p download-only-168263              | download-only-168263   | jenkins | v1.32.0 | 15 Jan 24 14:01 UTC | 15 Jan 24 14:01 UTC |
	| delete  | -p download-only-450455              | download-only-450455   | jenkins | v1.32.0 | 15 Jan 24 14:01 UTC | 15 Jan 24 14:01 UTC |
	| delete  | -p download-only-851187              | download-only-851187   | jenkins | v1.32.0 | 15 Jan 24 14:01 UTC | 15 Jan 24 14:01 UTC |
	| delete  | -p download-only-168263              | download-only-168263   | jenkins | v1.32.0 | 15 Jan 24 14:01 UTC | 15 Jan 24 14:01 UTC |
	| start   | --download-only -p                   | download-docker-152127 | jenkins | v1.32.0 | 15 Jan 24 14:01 UTC |                     |
	|         | download-docker-152127               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-152127            | download-docker-152127 | jenkins | v1.32.0 | 15 Jan 24 14:01 UTC | 15 Jan 24 14:01 UTC |
	| start   | --download-only -p                   | binary-mirror-093958   | jenkins | v1.32.0 | 15 Jan 24 14:01 UTC |                     |
	|         | binary-mirror-093958                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:41435               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-093958              | binary-mirror-093958   | jenkins | v1.32.0 | 15 Jan 24 14:01 UTC | 15 Jan 24 14:01 UTC |
	| addons  | enable dashboard -p                  | addons-916083          | jenkins | v1.32.0 | 15 Jan 24 14:01 UTC |                     |
	|         | addons-916083                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-916083          | jenkins | v1.32.0 | 15 Jan 24 14:01 UTC |                     |
	|         | addons-916083                        |                        |         |         |                     |                     |
	| start   | -p addons-916083 --wait=true         | addons-916083          | jenkins | v1.32.0 | 15 Jan 24 14:01 UTC | 15 Jan 24 14:03 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker        |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| ip      | addons-916083 ip                     | addons-916083          | jenkins | v1.32.0 | 15 Jan 24 14:04 UTC | 15 Jan 24 14:04 UTC |
	| addons  | addons-916083 addons disable         | addons-916083          | jenkins | v1.32.0 | 15 Jan 24 14:04 UTC | 15 Jan 24 14:04 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-916083 addons                 | addons-916083          | jenkins | v1.32.0 | 15 Jan 24 14:04 UTC | 15 Jan 24 14:04 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-916083          | jenkins | v1.32.0 | 15 Jan 24 14:04 UTC | 15 Jan 24 14:04 UTC |
	|         | addons-916083                        |                        |         |         |                     |                     |
	| ssh     | addons-916083 ssh curl -s            | addons-916083          | jenkins | v1.32.0 | 15 Jan 24 14:04 UTC | 15 Jan 24 14:04 UTC |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-916083 ip                     | addons-916083          | jenkins | v1.32.0 | 15 Jan 24 14:04 UTC | 15 Jan 24 14:04 UTC |
	| addons  | addons-916083 addons                 | addons-916083          | jenkins | v1.32.0 | 15 Jan 24 14:04 UTC | 15 Jan 24 14:04 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-916083 addons disable         | addons-916083          | jenkins | v1.32.0 | 15 Jan 24 14:04 UTC | 15 Jan 24 14:04 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-916083 addons disable         | addons-916083          | jenkins | v1.32.0 | 15 Jan 24 14:04 UTC | 15 Jan 24 14:04 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-916083 addons                 | addons-916083          | jenkins | v1.32.0 | 15 Jan 24 14:04 UTC | 15 Jan 24 14:04 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-916083          | jenkins | v1.32.0 | 15 Jan 24 14:05 UTC | 15 Jan 24 14:05 UTC |
	|         | -p addons-916083                     |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p             | addons-916083          | jenkins | v1.32.0 | 15 Jan 24 14:05 UTC |                     |
	|         | addons-916083                        |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 14:01:28
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 14:01:28.472523 4002183 out.go:296] Setting OutFile to fd 1 ...
	I0115 14:01:28.472713 4002183 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 14:01:28.472738 4002183 out.go:309] Setting ErrFile to fd 2...
	I0115 14:01:28.472757 4002183 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 14:01:28.473017 4002183 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-3996034/.minikube/bin
	I0115 14:01:28.473532 4002183 out.go:303] Setting JSON to false
	I0115 14:01:28.474413 4002183 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":67432,"bootTime":1705259857,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0115 14:01:28.474516 4002183 start.go:138] virtualization:  
	I0115 14:01:28.477100 4002183 out.go:177] * [addons-916083] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0115 14:01:28.479340 4002183 out.go:177]   - MINIKUBE_LOCATION=17957
	I0115 14:01:28.481223 4002183 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 14:01:28.479493 4002183 notify.go:220] Checking for updates...
	I0115 14:01:28.483312 4002183 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17957-3996034/kubeconfig
	I0115 14:01:28.485274 4002183 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-3996034/.minikube
	I0115 14:01:28.487226 4002183 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0115 14:01:28.489055 4002183 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 14:01:28.491453 4002183 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 14:01:28.515408 4002183 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 14:01:28.515557 4002183 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 14:01:28.594659 4002183 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-15 14:01:28.584754318 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 14:01:28.594766 4002183 docker.go:295] overlay module found
	I0115 14:01:28.596894 4002183 out.go:177] * Using the docker driver based on user configuration
	I0115 14:01:28.598594 4002183 start.go:298] selected driver: docker
	I0115 14:01:28.598623 4002183 start.go:902] validating driver "docker" against <nil>
	I0115 14:01:28.598637 4002183 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 14:01:28.599304 4002183 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 14:01:28.671259 4002183 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-15 14:01:28.661566705 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 14:01:28.671438 4002183 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 14:01:28.671696 4002183 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 14:01:28.673744 4002183 out.go:177] * Using Docker driver with root privileges
	I0115 14:01:28.675791 4002183 cni.go:84] Creating CNI manager for ""
	I0115 14:01:28.675854 4002183 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0115 14:01:28.675871 4002183 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0115 14:01:28.675886 4002183 start_flags.go:321] config:
	{Name:addons-916083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-916083 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 14:01:28.678493 4002183 out.go:177] * Starting control plane node addons-916083 in cluster addons-916083
	I0115 14:01:28.680553 4002183 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0115 14:01:28.682736 4002183 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0115 14:01:28.684812 4002183 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0115 14:01:28.684872 4002183 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17957-3996034/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0115 14:01:28.684885 4002183 cache.go:56] Caching tarball of preloaded images
	I0115 14:01:28.684914 4002183 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0115 14:01:28.684972 4002183 preload.go:174] Found /home/jenkins/minikube-integration/17957-3996034/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0115 14:01:28.684982 4002183 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0115 14:01:28.685352 4002183 profile.go:148] Saving config to /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/config.json ...
	I0115 14:01:28.685379 4002183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/config.json: {Name:mk92c7fbdca34bd5c56edbab295eadcbe0b00279 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:01:28.702207 4002183 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0115 14:01:28.702321 4002183 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0115 14:01:28.702340 4002183 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0115 14:01:28.702344 4002183 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0115 14:01:28.702355 4002183 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0115 14:01:28.702361 4002183 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 from local cache
	I0115 14:01:44.394325 4002183 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 from cached tarball
	I0115 14:01:44.394365 4002183 cache.go:194] Successfully downloaded all kic artifacts
	I0115 14:01:44.394443 4002183 start.go:365] acquiring machines lock for addons-916083: {Name:mk4ca45dcb3f98d8bf4134cef8afee4f8ad9a7b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 14:01:44.394567 4002183 start.go:369] acquired machines lock for "addons-916083" in 101.454µs
	I0115 14:01:44.394597 4002183 start.go:93] Provisioning new machine with config: &{Name:addons-916083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-916083 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0115 14:01:44.394681 4002183 start.go:125] createHost starting for "" (driver="docker")
	I0115 14:01:44.397178 4002183 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0115 14:01:44.397424 4002183 start.go:159] libmachine.API.Create for "addons-916083" (driver="docker")
	I0115 14:01:44.397454 4002183 client.go:168] LocalClient.Create starting
	I0115 14:01:44.397582 4002183 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/ca.pem
	I0115 14:01:44.600775 4002183 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/cert.pem
	I0115 14:01:45.678440 4002183 cli_runner.go:164] Run: docker network inspect addons-916083 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0115 14:01:45.695291 4002183 cli_runner.go:211] docker network inspect addons-916083 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0115 14:01:45.695386 4002183 network_create.go:281] running [docker network inspect addons-916083] to gather additional debugging logs...
	I0115 14:01:45.695410 4002183 cli_runner.go:164] Run: docker network inspect addons-916083
	W0115 14:01:45.711857 4002183 cli_runner.go:211] docker network inspect addons-916083 returned with exit code 1
	I0115 14:01:45.711891 4002183 network_create.go:284] error running [docker network inspect addons-916083]: docker network inspect addons-916083: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-916083 not found
	I0115 14:01:45.711916 4002183 network_create.go:286] output of [docker network inspect addons-916083]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-916083 not found
	
	** /stderr **
	I0115 14:01:45.712032 4002183 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 14:01:45.729595 4002183 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400258b090}
	I0115 14:01:45.729632 4002183 network_create.go:124] attempt to create docker network addons-916083 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0115 14:01:45.729691 4002183 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-916083 addons-916083
	I0115 14:01:45.802119 4002183 network_create.go:108] docker network addons-916083 192.168.49.0/24 created
	I0115 14:01:45.802157 4002183 kic.go:121] calculated static IP "192.168.49.2" for the "addons-916083" container
	I0115 14:01:45.802232 4002183 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0115 14:01:45.818622 4002183 cli_runner.go:164] Run: docker volume create addons-916083 --label name.minikube.sigs.k8s.io=addons-916083 --label created_by.minikube.sigs.k8s.io=true
	I0115 14:01:45.837648 4002183 oci.go:103] Successfully created a docker volume addons-916083
	I0115 14:01:45.837751 4002183 cli_runner.go:164] Run: docker run --rm --name addons-916083-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-916083 --entrypoint /usr/bin/test -v addons-916083:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0115 14:01:47.640508 4002183 cli_runner.go:217] Completed: docker run --rm --name addons-916083-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-916083 --entrypoint /usr/bin/test -v addons-916083:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib: (1.802715553s)
	I0115 14:01:47.640539 4002183 oci.go:107] Successfully prepared a docker volume addons-916083
	I0115 14:01:47.640566 4002183 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0115 14:01:47.640585 4002183 kic.go:194] Starting extracting preloaded images to volume ...
	I0115 14:01:47.640674 4002183 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17957-3996034/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-916083:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0115 14:01:51.805673 4002183 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17957-3996034/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-916083:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.164943924s)
	I0115 14:01:51.805717 4002183 kic.go:203] duration metric: took 4.165129 seconds to extract preloaded images to volume
	W0115 14:01:51.805857 4002183 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0115 14:01:51.805974 4002183 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0115 14:01:51.873950 4002183 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-916083 --name addons-916083 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-916083 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-916083 --network addons-916083 --ip 192.168.49.2 --volume addons-916083:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0115 14:01:52.236227 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Running}}
	I0115 14:01:52.255087 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:01:52.275087 4002183 cli_runner.go:164] Run: docker exec addons-916083 stat /var/lib/dpkg/alternatives/iptables
	I0115 14:01:52.345975 4002183 oci.go:144] the created container "addons-916083" has a running status.
	I0115 14:01:52.346008 4002183 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa...
	I0115 14:01:52.848644 4002183 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0115 14:01:52.893489 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:01:52.933978 4002183 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0115 14:01:52.934003 4002183 kic_runner.go:114] Args: [docker exec --privileged addons-916083 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0115 14:01:53.012652 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:01:53.050562 4002183 machine.go:88] provisioning docker machine ...
	I0115 14:01:53.050592 4002183 ubuntu.go:169] provisioning hostname "addons-916083"
	I0115 14:01:53.050662 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:01:53.083599 4002183 main.go:141] libmachine: Using SSH client type: native
	I0115 14:01:53.084113 4002183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 36439 <nil> <nil>}
	I0115 14:01:53.084132 4002183 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-916083 && echo "addons-916083" | sudo tee /etc/hostname
	I0115 14:01:53.286918 4002183 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-916083
	
	I0115 14:01:53.287100 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:01:53.310525 4002183 main.go:141] libmachine: Using SSH client type: native
	I0115 14:01:53.310939 4002183 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 36439 <nil> <nil>}
	I0115 14:01:53.310957 4002183 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-916083' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-916083/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-916083' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 14:01:53.464605 4002183 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0115 14:01:53.464639 4002183 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17957-3996034/.minikube CaCertPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17957-3996034/.minikube}
	I0115 14:01:53.464678 4002183 ubuntu.go:177] setting up certificates
	I0115 14:01:53.464687 4002183 provision.go:83] configureAuth start
	I0115 14:01:53.464750 4002183 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-916083
	I0115 14:01:53.486711 4002183 provision.go:138] copyHostCerts
	I0115 14:01:53.486802 4002183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17957-3996034/.minikube/ca.pem (1082 bytes)
	I0115 14:01:53.486961 4002183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17957-3996034/.minikube/cert.pem (1123 bytes)
	I0115 14:01:53.487029 4002183 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17957-3996034/.minikube/key.pem (1679 bytes)
	I0115 14:01:53.487078 4002183 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17957-3996034/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/ca-key.pem org=jenkins.addons-916083 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-916083]
	I0115 14:01:53.760942 4002183 provision.go:172] copyRemoteCerts
	I0115 14:01:53.761033 4002183 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 14:01:53.761082 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:01:53.780076 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:01:53.878254 4002183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0115 14:01:53.906918 4002183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0115 14:01:53.935967 4002183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0115 14:01:53.964554 4002183 provision.go:86] duration metric: configureAuth took 499.852546ms
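
For reference, a server certificate with the SANs listed above could be produced by hand with openssl along these lines (a minimal sketch; minikube does this in Go, and the file names here merely mirror the log):

	# sketch: issue a server cert signed by the minikube CA with the logged SANs
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -subj "/O=jenkins.addons-916083" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -extfile <(printf 'subjectAltName=IP:192.168.49.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:addons-916083') \
	  -days 365 -out server.pem
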
	I0115 14:01:53.964588 4002183 ubuntu.go:193] setting minikube options for container-runtime
	I0115 14:01:53.964792 4002183 config.go:182] Loaded profile config "addons-916083": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 14:01:53.964804 4002183 machine.go:91] provisioned docker machine in 914.225268ms
	I0115 14:01:53.964811 4002183 client.go:171] LocalClient.Create took 9.567349931s
	I0115 14:01:53.964823 4002183 start.go:167] duration metric: libmachine.API.Create for "addons-916083" took 9.567401852s
	I0115 14:01:53.964835 4002183 start.go:300] post-start starting for "addons-916083" (driver="docker")
	I0115 14:01:53.964850 4002183 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 14:01:53.964909 4002183 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 14:01:53.964986 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:01:53.982347 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:01:54.082281 4002183 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 14:01:54.086494 4002183 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0115 14:01:54.086532 4002183 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0115 14:01:54.086544 4002183 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0115 14:01:54.086552 4002183 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0115 14:01:54.086563 4002183 filesync.go:126] Scanning /home/jenkins/minikube-integration/17957-3996034/.minikube/addons for local assets ...
	I0115 14:01:54.086637 4002183 filesync.go:126] Scanning /home/jenkins/minikube-integration/17957-3996034/.minikube/files for local assets ...
	I0115 14:01:54.086663 4002183 start.go:303] post-start completed in 121.819581ms
	I0115 14:01:54.087006 4002183 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-916083
	I0115 14:01:54.104494 4002183 profile.go:148] Saving config to /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/config.json ...
	I0115 14:01:54.104783 4002183 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 14:01:54.104832 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:01:54.122498 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:01:54.217303 4002183 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0115 14:01:54.222931 4002183 start.go:128] duration metric: createHost completed in 9.828233949s
	I0115 14:01:54.222963 4002183 start.go:83] releasing machines lock for "addons-916083", held for 9.828382958s
	I0115 14:01:54.223040 4002183 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-916083
	I0115 14:01:54.240410 4002183 ssh_runner.go:195] Run: cat /version.json
	I0115 14:01:54.240431 4002183 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 14:01:54.240467 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:01:54.240497 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:01:54.259280 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:01:54.261640 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:01:54.356002 4002183 ssh_runner.go:195] Run: systemctl --version
	I0115 14:01:54.493994 4002183 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0115 14:01:54.499713 4002183 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0115 14:01:54.530037 4002183 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0115 14:01:54.530119 4002183 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 14:01:54.564612 4002183 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
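
The two find commands above patch the loopback CNI config in place and park any bridge/podman configs under a .mk_disabled suffix; the effect can be inspected from the host (sketch):

	docker exec addons-916083 ls /etc/cni/net.d
	# expected to list 87-podman-bridge.conflist.mk_disabled and
	# 100-crio-bridge.conf.mk_disabled next to the patched loopback config
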
	I0115 14:01:54.564645 4002183 start.go:475] detecting cgroup driver to use...
	I0115 14:01:54.564679 4002183 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0115 14:01:54.564742 4002183 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0115 14:01:54.578868 4002183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0115 14:01:54.592358 4002183 docker.go:217] disabling cri-docker service (if available) ...
	I0115 14:01:54.592475 4002183 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 14:01:54.608234 4002183 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 14:01:54.624061 4002183 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 14:01:54.726895 4002183 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 14:01:54.835581 4002183 docker.go:233] disabling docker service ...
	I0115 14:01:54.835648 4002183 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 14:01:54.856768 4002183 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 14:01:54.872078 4002183 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 14:01:54.970760 4002183 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 14:01:55.073684 4002183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 14:01:55.088031 4002183 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 14:01:55.108336 4002183 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0115 14:01:55.120357 4002183 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0115 14:01:55.132898 4002183 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0115 14:01:55.132965 4002183 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0115 14:01:55.146020 4002183 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0115 14:01:55.158173 4002183 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0115 14:01:55.170328 4002183 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0115 14:01:55.182378 4002183 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 14:01:55.193366 4002183 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0115 14:01:55.205089 4002183 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 14:01:55.215464 4002183 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 14:01:55.225903 4002183 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 14:01:55.326835 4002183 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0115 14:01:55.473674 4002183 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0115 14:01:55.473756 4002183 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0115 14:01:55.478419 4002183 start.go:543] Will wait 60s for crictl version
	I0115 14:01:55.478485 4002183 ssh_runner.go:195] Run: which crictl
	I0115 14:01:55.482810 4002183 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 14:01:55.526483 4002183 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.26
	RuntimeApiVersion:  v1
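
The version probe above goes through the CRI socket configured in /etc/crictl.yaml; the equivalent manual check would be (sketch):

	sudo systemctl is-active containerd
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
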
	I0115 14:01:55.526569 4002183 ssh_runner.go:195] Run: containerd --version
	I0115 14:01:55.558666 4002183 ssh_runner.go:195] Run: containerd --version
	I0115 14:01:55.594261 4002183 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.26 ...
	I0115 14:01:55.596245 4002183 cli_runner.go:164] Run: docker network inspect addons-916083 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 14:01:55.613131 4002183 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0115 14:01:55.617698 4002183 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
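
The one-liner above rewrites /etc/hosts idempotently: strip any stale host.minikube.internal line, append the fresh mapping, and copy the result back. To confirm inside the node (sketch):

	docker exec addons-916083 getent hosts host.minikube.internal
	# expected: 192.168.49.1    host.minikube.internal
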
	I0115 14:01:55.631329 4002183 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0115 14:01:55.631423 4002183 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 14:01:55.674730 4002183 containerd.go:612] all images are preloaded for containerd runtime.
	I0115 14:01:55.674757 4002183 containerd.go:519] Images already preloaded, skipping extraction
	I0115 14:01:55.674815 4002183 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 14:01:55.714649 4002183 containerd.go:612] all images are preloaded for containerd runtime.
	I0115 14:01:55.714674 4002183 cache_images.go:84] Images are preloaded, skipping loading
	I0115 14:01:55.714741 4002183 ssh_runner.go:195] Run: sudo crictl info
	I0115 14:01:55.755731 4002183 cni.go:84] Creating CNI manager for ""
	I0115 14:01:55.755757 4002183 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0115 14:01:55.755786 4002183 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 14:01:55.755804 4002183 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-916083 NodeName:addons-916083 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0115 14:01:55.755935 4002183 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-916083"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
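
A rendered config like the one above can be sanity-checked before it is handed to the real init (a sketch; the path matches where the log later copies kubeadm.yaml):

	# validate the generated config without mutating the node
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run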
	
	I0115 14:01:55.756003 4002183 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-916083 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-916083 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0115 14:01:55.756069 4002183 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0115 14:01:55.766669 4002183 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 14:01:55.766775 4002183 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 14:01:55.777069 4002183 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (385 bytes)
	I0115 14:01:55.798058 4002183 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0115 14:01:55.818887 4002183 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
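
With the unit file and drop-in transferred, the merged kubelet unit can be inspected with systemd directly (sketch):

	docker exec addons-916083 systemctl cat kubelet
	# prints /lib/systemd/system/kubelet.service followed by
	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
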
	I0115 14:01:55.839527 4002183 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0115 14:01:55.843794 4002183 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 14:01:55.856774 4002183 certs.go:56] Setting up /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083 for IP: 192.168.49.2
	I0115 14:01:55.856808 4002183 certs.go:190] acquiring lock for shared ca certs: {Name:mk9e910b1d22df90feaffa3b68f77c94f902dcfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:01:55.856937 4002183 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17957-3996034/.minikube/ca.key
	I0115 14:01:56.365558 4002183 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17957-3996034/.minikube/ca.crt ...
	I0115 14:01:56.365589 4002183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/.minikube/ca.crt: {Name:mk9316865b0b0941ddfd00975a3bc8e7a0880170 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:01:56.365795 4002183 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17957-3996034/.minikube/ca.key ...
	I0115 14:01:56.365809 4002183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/.minikube/ca.key: {Name:mk154151ca5d9b8cca9e9c2d0311b4724132fce7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:01:56.365895 4002183 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17957-3996034/.minikube/proxy-client-ca.key
	I0115 14:01:56.598002 4002183 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17957-3996034/.minikube/proxy-client-ca.crt ...
	I0115 14:01:56.598030 4002183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/.minikube/proxy-client-ca.crt: {Name:mk840a90585cdf3c26c2e019ac23ab831ac23f64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:01:56.598205 4002183 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17957-3996034/.minikube/proxy-client-ca.key ...
	I0115 14:01:56.598216 4002183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/.minikube/proxy-client-ca.key: {Name:mkff8a5cf2f609e63496d40510f33a3131dec2be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:01:56.598333 4002183 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.key
	I0115 14:01:56.598348 4002183 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt with IP's: []
	I0115 14:01:56.801804 4002183 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt ...
	I0115 14:01:56.801835 4002183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: {Name:mk81eda9512287f09041d3cbe740f7cff0d6ddc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:01:56.802017 4002183 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.key ...
	I0115 14:01:56.802029 4002183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.key: {Name:mkc9faf286c154cb994c3becb8a3ed3476eae285 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:01:56.802648 4002183 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/apiserver.key.dd3b5fb2
	I0115 14:01:56.802672 4002183 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0115 14:01:57.038559 4002183 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/apiserver.crt.dd3b5fb2 ...
	I0115 14:01:57.038598 4002183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/apiserver.crt.dd3b5fb2: {Name:mkf606d0b807afb756347bd3c22025099a5ff12c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:01:57.038809 4002183 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/apiserver.key.dd3b5fb2 ...
	I0115 14:01:57.038825 4002183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/apiserver.key.dd3b5fb2: {Name:mk62dbb3ce4d8e00be64a3f5a490d58f50d25a5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:01:57.039457 4002183 certs.go:337] copying /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/apiserver.crt
	I0115 14:01:57.039546 4002183 certs.go:341] copying /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/apiserver.key
	I0115 14:01:57.039598 4002183 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/proxy-client.key
	I0115 14:01:57.039620 4002183 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/proxy-client.crt with IP's: []
	I0115 14:01:57.676232 4002183 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/proxy-client.crt ...
	I0115 14:01:57.676263 4002183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/proxy-client.crt: {Name:mk5d6f5b33710a8dc7ecc907fa9718898af26a9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:01:57.676451 4002183 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/proxy-client.key ...
	I0115 14:01:57.676465 4002183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/proxy-client.key: {Name:mka07b3b18353f4266ea68729926c05d85e25671 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:01:57.676660 4002183 certs.go:437] found cert: /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/ca-key.pem (1675 bytes)
	I0115 14:01:57.676708 4002183 certs.go:437] found cert: /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/ca.pem (1082 bytes)
	I0115 14:01:57.676742 4002183 certs.go:437] found cert: /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/cert.pem (1123 bytes)
	I0115 14:01:57.676771 4002183 certs.go:437] found cert: /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/key.pem (1679 bytes)
	I0115 14:01:57.677415 4002183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 14:01:57.705801 4002183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0115 14:01:57.734680 4002183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 14:01:57.763596 4002183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 14:01:57.792049 4002183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 14:01:57.822918 4002183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0115 14:01:57.851756 4002183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 14:01:57.880260 4002183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 14:01:57.908723 4002183 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 14:01:57.939016 4002183 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 14:01:57.961996 4002183 ssh_runner.go:195] Run: openssl version
	I0115 14:01:57.969405 4002183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 14:01:57.981095 4002183 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 14:01:57.985968 4002183 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 14:01 /usr/share/ca-certificates/minikubeCA.pem
	I0115 14:01:57.986083 4002183 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 14:01:57.994558 4002183 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
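
The b5213941.0 link name is OpenSSL's subject hash of the CA certificate, which is how the system trust store indexes certs; the same link could be rebuilt by hand (sketch):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
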
	I0115 14:01:58.006782 4002183 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 14:01:58.011446 4002183 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0115 14:01:58.011512 4002183 kubeadm.go:404] StartCluster: {Name:addons-916083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-916083 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 14:01:58.011598 4002183 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0115 14:01:58.011689 4002183 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 14:01:58.054429 4002183 cri.go:89] found id: ""
	I0115 14:01:58.054510 4002183 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 14:01:58.065307 4002183 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 14:01:58.076395 4002183 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0115 14:01:58.076484 4002183 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 14:01:58.087298 4002183 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 14:01:58.087374 4002183 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0115 14:01:58.149228 4002183 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0115 14:01:58.149608 4002183 kubeadm.go:322] [preflight] Running pre-flight checks
	I0115 14:01:58.197654 4002183 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0115 14:01:58.197728 4002183 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I0115 14:01:58.197770 4002183 kubeadm.go:322] OS: Linux
	I0115 14:01:58.197821 4002183 kubeadm.go:322] CGROUPS_CPU: enabled
	I0115 14:01:58.197871 4002183 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0115 14:01:58.197919 4002183 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0115 14:01:58.197968 4002183 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0115 14:01:58.198017 4002183 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0115 14:01:58.198066 4002183 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0115 14:01:58.198112 4002183 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0115 14:01:58.198160 4002183 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0115 14:01:58.198207 4002183 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0115 14:01:58.286461 4002183 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0115 14:01:58.286629 4002183 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0115 14:01:58.286763 4002183 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0115 14:01:58.532870 4002183 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0115 14:01:58.535222 4002183 out.go:204]   - Generating certificates and keys ...
	I0115 14:01:58.535436 4002183 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0115 14:01:58.535547 4002183 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0115 14:01:58.914220 4002183 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0115 14:01:59.518539 4002183 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0115 14:02:00.244843 4002183 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0115 14:02:01.709142 4002183 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0115 14:02:02.064867 4002183 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0115 14:02:02.065005 4002183 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-916083 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0115 14:02:02.403222 4002183 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0115 14:02:02.403366 4002183 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-916083 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0115 14:02:02.866282 4002183 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0115 14:02:04.731843 4002183 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0115 14:02:05.086262 4002183 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0115 14:02:05.086556 4002183 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0115 14:02:06.248847 4002183 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0115 14:02:06.641936 4002183 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0115 14:02:07.211748 4002183 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0115 14:02:07.385460 4002183 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0115 14:02:07.386082 4002183 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0115 14:02:07.388801 4002183 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0115 14:02:07.391413 4002183 out.go:204]   - Booting up control plane ...
	I0115 14:02:07.391516 4002183 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0115 14:02:07.391590 4002183 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0115 14:02:07.392914 4002183 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0115 14:02:07.407921 4002183 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 14:02:07.409519 4002183 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 14:02:07.409952 4002183 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0115 14:02:07.515312 4002183 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0115 14:02:15.018215 4002183 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502932 seconds
	I0115 14:02:15.018344 4002183 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0115 14:02:15.034313 4002183 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0115 14:02:15.560340 4002183 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0115 14:02:15.560530 4002183 kubeadm.go:322] [mark-control-plane] Marking the node addons-916083 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0115 14:02:16.071665 4002183 kubeadm.go:322] [bootstrap-token] Using token: s4mcyt.e4j4waoo0vgsvs3m
	I0115 14:02:16.073687 4002183 out.go:204]   - Configuring RBAC rules ...
	I0115 14:02:16.073815 4002183 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0115 14:02:16.078902 4002183 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0115 14:02:16.087163 4002183 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0115 14:02:16.090901 4002183 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0115 14:02:16.095692 4002183 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0115 14:02:16.100936 4002183 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0115 14:02:16.115637 4002183 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0115 14:02:16.361487 4002183 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0115 14:02:16.483057 4002183 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0115 14:02:16.484286 4002183 kubeadm.go:322] 
	I0115 14:02:16.484359 4002183 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0115 14:02:16.484373 4002183 kubeadm.go:322] 
	I0115 14:02:16.484447 4002183 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0115 14:02:16.484456 4002183 kubeadm.go:322] 
	I0115 14:02:16.484481 4002183 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0115 14:02:16.484704 4002183 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0115 14:02:16.484764 4002183 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0115 14:02:16.484777 4002183 kubeadm.go:322] 
	I0115 14:02:16.484829 4002183 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0115 14:02:16.484838 4002183 kubeadm.go:322] 
	I0115 14:02:16.484883 4002183 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0115 14:02:16.484892 4002183 kubeadm.go:322] 
	I0115 14:02:16.484942 4002183 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0115 14:02:16.485016 4002183 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0115 14:02:16.485087 4002183 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0115 14:02:16.485097 4002183 kubeadm.go:322] 
	I0115 14:02:16.485176 4002183 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0115 14:02:16.485251 4002183 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0115 14:02:16.485260 4002183 kubeadm.go:322] 
	I0115 14:02:16.485350 4002183 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token s4mcyt.e4j4waoo0vgsvs3m \
	I0115 14:02:16.485452 4002183 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7a6d785f4518c70e5cb54aff2b25c2e4257d667a1215c730d9bd23381d7f6388 \
	I0115 14:02:16.485477 4002183 kubeadm.go:322] 	--control-plane 
	I0115 14:02:16.485482 4002183 kubeadm.go:322] 
	I0115 14:02:16.485568 4002183 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0115 14:02:16.485579 4002183 kubeadm.go:322] 
	I0115 14:02:16.485657 4002183 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token s4mcyt.e4j4waoo0vgsvs3m \
	I0115 14:02:16.485756 4002183 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:7a6d785f4518c70e5cb54aff2b25c2e4257d667a1215c730d9bd23381d7f6388 
	I0115 14:02:16.488943 4002183 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0115 14:02:16.489056 4002183 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
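
At this point the control plane is up and can be queried directly with the generated admin kubeconfig (sketch; the node reports NotReady until a CNI is applied below):

	docker exec addons-916083 sudo /var/lib/minikube/binaries/v1.28.4/kubectl \
	  --kubeconfig=/etc/kubernetes/admin.conf get nodes
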
	I0115 14:02:16.489078 4002183 cni.go:84] Creating CNI manager for ""
	I0115 14:02:16.489087 4002183 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0115 14:02:16.491454 4002183 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0115 14:02:16.493520 4002183 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0115 14:02:16.503580 4002183 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0115 14:02:16.503600 4002183 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0115 14:02:16.534027 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
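
Once the kindnet manifest is applied, a pod from its DaemonSet should get scheduled on the node; a quick check (sketch, label name assumed from minikube's kindnet manifest):

	sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get pods -n kube-system -l app=kindnet
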
	I0115 14:02:17.433303 4002183 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 14:02:17.433490 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:17.433614 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=71cf7d00913f789829bf5813c1d11b9a83eda53e minikube.k8s.io/name=addons-916083 minikube.k8s.io/updated_at=2024_01_15T14_02_17_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:17.451120 4002183 ops.go:34] apiserver oom_adj: -16
	I0115 14:02:17.584748 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:18.084811 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:18.585426 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:19.085032 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:19.585055 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:20.084873 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:20.584886 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:21.085232 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:21.585555 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:22.084918 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:22.584862 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:23.084907 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:23.585605 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:24.085758 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:24.585751 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:25.084885 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:25.585620 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:26.085764 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:26.584873 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:27.085436 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:27.585242 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:28.085044 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:28.584952 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:29.084865 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:29.585788 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:30.085156 4002183 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:02:30.179004 4002183 kubeadm.go:1088] duration metric: took 12.745572227s to wait for elevateKubeSystemPrivileges.
	I0115 14:02:30.179037 4002183 kubeadm.go:406] StartCluster complete in 32.167547669s
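
The repeated "get sa default" calls above are a readiness poll for the default ServiceAccount in the default namespace; the same wait as a plain shell loop would be (sketch):

	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
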
	I0115 14:02:30.179054 4002183 settings.go:142] acquiring lock: {Name:mkf7c3579062a76dbc15f21d34a0f70748bbdf8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:02:30.179796 4002183 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17957-3996034/kubeconfig
	I0115 14:02:30.180210 4002183 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/kubeconfig: {Name:mk3afa6cfd54a2e8849d9a076ecc839592eb1132 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:02:30.180970 4002183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 14:02:30.181261 4002183 config.go:182] Loaded profile config "addons-916083": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 14:02:30.181381 4002183 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0115 14:02:30.181465 4002183 addons.go:69] Setting yakd=true in profile "addons-916083"
	I0115 14:02:30.181482 4002183 addons.go:234] Setting addon yakd=true in "addons-916083"
	I0115 14:02:30.181538 4002183 host.go:66] Checking if "addons-916083" exists ...
	I0115 14:02:30.182016 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:30.182485 4002183 addons.go:69] Setting cloud-spanner=true in profile "addons-916083"
	I0115 14:02:30.182503 4002183 addons.go:234] Setting addon cloud-spanner=true in "addons-916083"
	I0115 14:02:30.182535 4002183 host.go:66] Checking if "addons-916083" exists ...
	I0115 14:02:30.182934 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:30.183358 4002183 addons.go:69] Setting metrics-server=true in profile "addons-916083"
	I0115 14:02:30.183381 4002183 addons.go:234] Setting addon metrics-server=true in "addons-916083"
	I0115 14:02:30.183413 4002183 host.go:66] Checking if "addons-916083" exists ...
	I0115 14:02:30.183803 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:30.184229 4002183 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-916083"
	I0115 14:02:30.184270 4002183 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-916083"
	I0115 14:02:30.184300 4002183 host.go:66] Checking if "addons-916083" exists ...
	I0115 14:02:30.184682 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:30.194292 4002183 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-916083"
	I0115 14:02:30.194584 4002183 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-916083"
	I0115 14:02:30.194641 4002183 host.go:66] Checking if "addons-916083" exists ...
	I0115 14:02:30.197485 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:30.199331 4002183 addons.go:69] Setting default-storageclass=true in profile "addons-916083"
	I0115 14:02:30.199369 4002183 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-916083"
	I0115 14:02:30.199678 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:30.206452 4002183 addons.go:69] Setting registry=true in profile "addons-916083"
	I0115 14:02:30.206527 4002183 addons.go:234] Setting addon registry=true in "addons-916083"
	I0115 14:02:30.206606 4002183 host.go:66] Checking if "addons-916083" exists ...
	I0115 14:02:30.207083 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:30.216505 4002183 addons.go:69] Setting gcp-auth=true in profile "addons-916083"
	I0115 14:02:30.216544 4002183 mustload.go:65] Loading cluster: addons-916083
	I0115 14:02:30.216784 4002183 config.go:182] Loaded profile config "addons-916083": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 14:02:30.217038 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:30.217467 4002183 addons.go:69] Setting storage-provisioner=true in profile "addons-916083"
	I0115 14:02:30.217490 4002183 addons.go:234] Setting addon storage-provisioner=true in "addons-916083"
	I0115 14:02:30.217539 4002183 host.go:66] Checking if "addons-916083" exists ...
	I0115 14:02:30.217931 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:30.229688 4002183 addons.go:69] Setting ingress=true in profile "addons-916083"
	I0115 14:02:30.229782 4002183 addons.go:234] Setting addon ingress=true in "addons-916083"
	I0115 14:02:30.229900 4002183 host.go:66] Checking if "addons-916083" exists ...
	I0115 14:02:30.230603 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:30.252461 4002183 addons.go:69] Setting ingress-dns=true in profile "addons-916083"
	I0115 14:02:30.252499 4002183 addons.go:234] Setting addon ingress-dns=true in "addons-916083"
	I0115 14:02:30.252553 4002183 host.go:66] Checking if "addons-916083" exists ...
	I0115 14:02:30.253043 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:30.253339 4002183 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-916083"
	I0115 14:02:30.253377 4002183 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-916083"
	I0115 14:02:30.253677 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:30.264969 4002183 addons.go:69] Setting volumesnapshots=true in profile "addons-916083"
	I0115 14:02:30.265004 4002183 addons.go:234] Setting addon volumesnapshots=true in "addons-916083"
	I0115 14:02:30.265061 4002183 host.go:66] Checking if "addons-916083" exists ...
	I0115 14:02:30.265523 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:30.265673 4002183 addons.go:69] Setting inspektor-gadget=true in profile "addons-916083"
	I0115 14:02:30.265688 4002183 addons.go:234] Setting addon inspektor-gadget=true in "addons-916083"
	I0115 14:02:30.265717 4002183 host.go:66] Checking if "addons-916083" exists ...
	I0115 14:02:30.266084 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
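Each "Setting addon <name>=true" burst above follows the same three steps: record the addon in the profile, check that the "addons-916083" machine exists, then probe the Docker container's state with a Go template so only the bare state string comes back. A minimal stand-alone sketch of that probe (container name taken from this log; the helper itself is assumed, not minikube's code):

    // Probe a container's state the way the repeated
    // `docker container inspect --format={{.State.Status}}` runs above do.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerState shells out with a Go template so stdout is just the
    // state string (e.g. "running"), no JSON parsing required.
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect",
            name, "--format", "{{.State.Status}}").Output()
        if err != nil {
            return "", fmt.Errorf("inspect %s: %w", name, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        state, err := containerState("addons-916083")
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println("container state:", state)
    }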
	I0115 14:02:30.427036 4002183 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0115 14:02:30.431026 4002183 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0115 14:02:30.434001 4002183 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0115 14:02:30.446147 4002183 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0115 14:02:30.446211 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
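The "scp memory --> <path> (N bytes)" lines record manifests that never touch the local disk: the YAML is held in memory and streamed over the SSH session straight into the addon path on the node. A hedged equivalent using golang.org/x/crypto/ssh (host, port, user, and key path are the ones this log dials later; piping through `sudo tee` is an assumed stand-in for ssh_runner's actual transfer):

    // Stream an in-memory manifest to a remote path over SSH.
    package main

    import (
        "bytes"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:36439", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test node
        })
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()

        manifest := []byte("# YAML payload held in memory\n")
        sess.Stdin = bytes.NewReader(manifest)
        if err := sess.Run("sudo tee /etc/kubernetes/addons/ingress-dns-pod.yaml >/dev/null"); err != nil {
            log.Fatal(err)
        }
    }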
	I0115 14:02:30.446304 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:02:30.431307 4002183 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0115 14:02:30.472179 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0115 14:02:30.472418 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
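The longer inspect template above, (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort, digs into the container's published-ports map to recover the host port Docker bound for container port 22; that value (36439 in this run) is what every later sshutil.go line dials on 127.0.0.1. The same lookup over the raw inspect JSON, as an illustrative sketch (struct fields mirror Docker's JSON; the program is not minikube's):

    // Resolve the host port mapped to 22/tcp from `docker container inspect`.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type inspect struct {
        NetworkSettings struct {
            Ports map[string][]struct {
                HostIP   string `json:"HostIp"`
                HostPort string `json:"HostPort"`
            } `json:"Ports"`
        } `json:"NetworkSettings"`
    }

    func main() {
        out, err := exec.Command("docker", "container", "inspect", "addons-916083").Output()
        if err != nil {
            panic(err)
        }
        var results []inspect // inspect always returns a JSON array
        if err := json.Unmarshal(out, &results); err != nil {
            panic(err)
        }
        bindings := results[0].NetworkSettings.Ports["22/tcp"]
        if len(bindings) == 0 {
            fmt.Println("port 22/tcp is not published")
            return
        }
        fmt.Println("ssh host port:", bindings[0].HostPort) // 36439 in this run
    }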
	I0115 14:02:30.477935 4002183 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0115 14:02:30.482695 4002183 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0115 14:02:30.486142 4002183 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0115 14:02:30.489808 4002183 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0115 14:02:30.491901 4002183 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0115 14:02:30.494337 4002183 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0115 14:02:30.432454 4002183 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-916083"
	I0115 14:02:30.433977 4002183 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0115 14:02:30.482566 4002183 addons.go:234] Setting addon default-storageclass=true in "addons-916083"
	I0115 14:02:30.431314 4002183 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0115 14:02:30.494269 4002183 host.go:66] Checking if "addons-916083" exists ...
	I0115 14:02:30.496175 4002183 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0115 14:02:30.496219 4002183 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0115 14:02:30.496249 4002183 host.go:66] Checking if "addons-916083" exists ...
	I0115 14:02:30.496255 4002183 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 14:02:30.496266 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0115 14:02:30.499554 4002183 host.go:66] Checking if "addons-916083" exists ...
	I0115 14:02:30.499566 4002183 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0115 14:02:30.499570 4002183 out.go:177]   - Using image docker.io/registry:2.8.3
	I0115 14:02:30.502537 4002183 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0115 14:02:30.503098 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:30.503137 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:02:30.504987 4002183 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0115 14:02:30.505502 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:30.507138 4002183 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0115 14:02:30.509486 4002183 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0115 14:02:30.511771 4002183 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 14:02:30.515221 4002183 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0115 14:02:30.515303 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0115 14:02:30.523784 4002183 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0115 14:02:30.523847 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0115 14:02:30.523854 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 14:02:30.531966 4002183 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0115 14:02:30.531989 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0115 14:02:30.534181 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:02:30.534191 4002183 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0115 14:02:30.543019 4002183 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0115 14:02:30.536720 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0115 14:02:30.536734 4002183 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0115 14:02:30.536801 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:02:30.536829 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:02:30.537558 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:02:30.550184 4002183 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0115 14:02:30.550273 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:02:30.582041 4002183 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0115 14:02:30.582067 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0115 14:02:30.582182 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:02:30.554095 4002183 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0115 14:02:30.605878 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0115 14:02:30.605958 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:02:30.623615 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:02:30.630924 4002183 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 14:02:30.630951 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 14:02:30.631016 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:02:30.578248 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0115 14:02:30.634644 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:02:30.645972 4002183 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0115 14:02:30.653849 4002183 out.go:177]   - Using image docker.io/busybox:stable
	I0115 14:02:30.655964 4002183 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0115 14:02:30.655987 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0115 14:02:30.656055 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:02:30.644671 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:02:30.695417 4002183 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-916083" context rescaled to 1 replicas
	I0115 14:02:30.695455 4002183 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0115 14:02:30.697631 4002183 out.go:177] * Verifying Kubernetes components...
	I0115 14:02:30.701895 4002183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
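`systemctl is-active --quiet` prints nothing: the unit's state comes back entirely in the exit status (0 means active), so the component check above only needs to look at the error returned by the remote command. Locally that check reduces to the following (assumed wrapper, not ssh_runner's code):

    // The exit status alone says whether kubelet is active.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil) // nil error == exit status 0
    }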
	I0115 14:02:30.702352 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:02:30.791402 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:02:30.811387 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:02:30.831460 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:02:30.856328 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:02:30.868827 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:02:30.872813 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:02:30.879425 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:02:30.888950 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:02:30.907641 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:02:30.919157 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	W0115 14:02:30.920975 4002183 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0115 14:02:30.921008 4002183 retry.go:31] will retry after 319.272093ms: ssh: handshake failed: EOF
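The retry.go line above is minikube's generic retry wrapper absorbing a transient failure: one of the parallel SSH dials hit an EOF during the handshake (sshd on the node was still settling under roughly a dozen concurrent connections), so the operation is re-run after a short, growing delay instead of failing the whole addon install. A minimal sketch of that shape (delay scale and helper name assumed, not copied from retry.go):

    // Run op; on failure, sleep a jittered, growing delay and try again,
    // logging "will retry after <d>" the way the line above does.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func withRetry(attempts int, op func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            d := time.Duration(rand.Int63n(int64(200*time.Millisecond))) * time.Duration(i+1)
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        calls := 0
        err := withRetry(5, func() error {
            calls++
            if calls < 3 {
                return errors.New("ssh: handshake failed: EOF")
            }
            return nil
        })
        fmt.Printf("result: %v after %d attempts\n", err, calls)
    }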
	I0115 14:02:31.067250 4002183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0115 14:02:31.075713 4002183 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0115 14:02:31.075776 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0115 14:02:31.109908 4002183 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0115 14:02:31.110818 4002183 node_ready.go:35] waiting up to 6m0s for node "addons-916083" to be "Ready" ...
	I0115 14:02:31.115448 4002183 node_ready.go:49] node "addons-916083" has status "Ready":"True"
	I0115 14:02:31.115515 4002183 node_ready.go:38] duration metric: took 4.605127ms waiting for node "addons-916083" to be "Ready" ...
	I0115 14:02:31.115539 4002183 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 14:02:31.124610 4002183 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6pkbg" in "kube-system" namespace to be "Ready" ...
	I0115 14:02:31.217234 4002183 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0115 14:02:31.217384 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0115 14:02:31.237417 4002183 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0115 14:02:31.237487 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0115 14:02:31.257409 4002183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 14:02:31.279060 4002183 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0115 14:02:31.279084 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0115 14:02:31.288406 4002183 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0115 14:02:31.288473 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0115 14:02:31.397940 4002183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 14:02:31.439013 4002183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0115 14:02:31.576583 4002183 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0115 14:02:31.576659 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0115 14:02:31.603643 4002183 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 14:02:31.603726 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0115 14:02:31.615809 4002183 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0115 14:02:31.615880 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0115 14:02:31.639514 4002183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0115 14:02:31.740479 4002183 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0115 14:02:31.740553 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0115 14:02:31.765595 4002183 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0115 14:02:31.765668 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0115 14:02:31.772237 4002183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0115 14:02:31.774208 4002183 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0115 14:02:31.774263 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0115 14:02:31.866153 4002183 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0115 14:02:31.866217 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0115 14:02:31.880021 4002183 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0115 14:02:31.880094 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0115 14:02:31.893082 4002183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0115 14:02:31.929612 4002183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0115 14:02:31.998791 4002183 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0115 14:02:31.998812 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0115 14:02:32.004920 4002183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0115 14:02:32.069926 4002183 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0115 14:02:32.069959 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0115 14:02:32.091798 4002183 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0115 14:02:32.091869 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0115 14:02:32.128238 4002183 pod_ready.go:97] error getting pod "coredns-5dd5756b68-6pkbg" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-6pkbg" not found
	I0115 14:02:32.128315 4002183 pod_ready.go:81] duration metric: took 1.003632118s waiting for pod "coredns-5dd5756b68-6pkbg" in "kube-system" namespace to be "Ready" ...
	E0115 14:02:32.128340 4002183 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-6pkbg" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-6pkbg" not found
	I0115 14:02:32.128360 4002183 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace to be "Ready" ...
	I0115 14:02:32.173025 4002183 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0115 14:02:32.173096 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0115 14:02:32.333659 4002183 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0115 14:02:32.333729 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0115 14:02:32.337879 4002183 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0115 14:02:32.337942 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0115 14:02:32.429002 4002183 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0115 14:02:32.429063 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0115 14:02:32.458649 4002183 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0115 14:02:32.458716 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0115 14:02:32.639091 4002183 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0115 14:02:32.639165 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0115 14:02:32.651844 4002183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0115 14:02:32.719273 4002183 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0115 14:02:32.719300 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0115 14:02:32.763014 4002183 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0115 14:02:32.763041 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0115 14:02:32.945009 4002183 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0115 14:02:32.945036 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0115 14:02:32.967873 4002183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0115 14:02:33.104524 4002183 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0115 14:02:33.104549 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0115 14:02:33.298765 4002183 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0115 14:02:33.298791 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0115 14:02:33.385597 4002183 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0115 14:02:33.385628 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0115 14:02:33.485171 4002183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0115 14:02:33.597515 4002183 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0115 14:02:33.597542 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0115 14:02:33.826956 4002183 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0115 14:02:33.826982 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0115 14:02:34.136413 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:02:34.177295 4002183 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0115 14:02:34.177323 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0115 14:02:34.313976 4002183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0115 14:02:34.431481 4002183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.364198228s)
	I0115 14:02:34.431544 4002183 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.321563307s)
	I0115 14:02:34.431558 4002183 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
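The sed pipeline that just completed edits CoreDNS's Corefile inside the coredns ConfigMap: it inserts a hosts block immediately before the `forward . /etc/resolv.conf` directive and a `log` directive before `errors`, then `kubectl replace`s the ConfigMap. Reconstructed from those two sed expressions, the relevant fragment of the resulting Corefile is (elided directives are whatever the default Corefile already carried):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }

This is what lets pods resolve host.minikube.internal to the Docker network gateway (192.168.49.1) without touching upstream DNS; `fallthrough` hands every other name on to the remaining plugins.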
	I0115 14:02:35.121313 4002183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.863833163s)
	I0115 14:02:35.121403 4002183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.723405041s)
	I0115 14:02:35.121632 4002183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.682553576s)
	I0115 14:02:35.121681 4002183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.482100843s)
	I0115 14:02:35.886024 4002183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.113707315s)
	I0115 14:02:36.164815 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:02:37.316295 4002183 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0115 14:02:37.316379 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:02:37.359340 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:02:37.573960 4002183 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0115 14:02:37.644519 4002183 addons.go:234] Setting addon gcp-auth=true in "addons-916083"
	I0115 14:02:37.644607 4002183 host.go:66] Checking if "addons-916083" exists ...
	I0115 14:02:37.645099 4002183 cli_runner.go:164] Run: docker container inspect addons-916083 --format={{.State.Status}}
	I0115 14:02:37.690416 4002183 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0115 14:02:37.690477 4002183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-916083
	I0115 14:02:37.743440 4002183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36439 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/addons-916083/id_rsa Username:docker}
	I0115 14:02:38.442111 4002183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.548944812s)
	I0115 14:02:38.442184 4002183 addons.go:470] Verifying addon ingress=true in "addons-916083"
	I0115 14:02:38.446729 4002183 out.go:177] * Verifying ingress addon...
	I0115 14:02:38.442425 4002183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.512737431s)
	I0115 14:02:38.442465 4002183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.43752348s)
	I0115 14:02:38.442506 4002183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.790587215s)
	I0115 14:02:38.442580 4002183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.474679805s)
	I0115 14:02:38.442652 4002183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.95745613s)
	I0115 14:02:38.449018 4002183 addons.go:470] Verifying addon registry=true in "addons-916083"
	I0115 14:02:38.455589 4002183 out.go:177] * Verifying registry addon...
	W0115 14:02:38.449206 4002183 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0115 14:02:38.449428 4002183 addons.go:470] Verifying addon metrics-server=true in "addons-916083"
	I0115 14:02:38.450205 4002183 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0115 14:02:38.459344 4002183 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0115 14:02:38.459497 4002183 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-916083 service yakd-dashboard -n yakd-dashboard
	
	I0115 14:02:38.459566 4002183 retry.go:31] will retry after 306.433825ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
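This failure (logged once at addons.go:452 and again in the retry notice) is a CRD ordering race: the batch apply submits the csi-hostpath-snapclass VolumeSnapshotClass in the same invocation as the CRDs that define it, and the API server rejects the custom object because the snapshot.storage.k8s.io/v1 types are not discoverable yet, hence "ensure CRDs are installed first". minikube's remedy here is simply to retry (below, with `kubectl apply --force`), which succeeds once the CRDs are established. A two-phase shape that avoids the race entirely, sketched in Go with paths from this log (the helper is an assumption, not minikube's code):

    // Apply CRDs, wait until the API server serves them, then apply
    // the objects that depend on them.
    package main

    import (
        "log"
        "os/exec"
    )

    func run(args ...string) {
        cmd := exec.Command(args[0], args[1:]...)
        cmd.Stdout, cmd.Stderr = log.Writer(), log.Writer()
        if err := cmd.Run(); err != nil {
            log.Fatalf("%v failed: %v", args, err)
        }
    }

    func main() {
        // Phase 1: register the CRD only.
        run("kubectl", "apply",
            "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml")
        // Block until the new kind is actually served.
        run("kubectl", "wait", "--for=condition=established",
            "crd/volumesnapshotclasses.snapshot.storage.k8s.io")
        // Phase 2: VolumeSnapshotClass objects can now be applied safely.
        run("kubectl", "apply",
            "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml")
    }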
	I0115 14:02:38.464198 4002183 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0115 14:02:38.475568 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:38.466216 4002183 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0115 14:02:38.475593 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
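From here the log settles into kapi.go:96 polling: for each addon, the pods matching a label selector are listed and re-checked until none is still Pending (the repeated "waiting for pod" lines through 14:02:57 below are those loops ticking, several per second across the parallel verifiers). A self-contained client-go sketch of one such loop (namespace and selector from this log; the function name and poll interval are assumptions):

    // Poll pods matching a label selector until all report phase Running.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitForPods(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
                metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 {
                allRunning := true
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        fmt.Printf("waiting for pod %q, current state: %s\n",
                            selector, p.Status.Phase)
                        allRunning = false
                    }
                }
                if allRunning {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("pods %q in %q not Running within %v", selector, ns, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitForPods(cs, "ingress-nginx",
            "app.kubernetes.io/name=ingress-nginx", 6*time.Minute))
    }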
	I0115 14:02:38.637726 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:02:38.783122 4002183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0115 14:02:38.966427 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:38.973621 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:39.490270 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:39.491614 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:39.965960 4002183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.651934566s)
	I0115 14:02:39.966033 4002183 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-916083"
	I0115 14:02:39.968384 4002183 out.go:177] * Verifying csi-hostpath-driver addon...
	I0115 14:02:39.966225 4002183 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.275780011s)
	I0115 14:02:39.970904 4002183 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0115 14:02:39.971665 4002183 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0115 14:02:39.972935 4002183 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0115 14:02:39.974551 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:39.975450 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:39.976482 4002183 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0115 14:02:39.976573 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0115 14:02:39.994622 4002183 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0115 14:02:39.994687 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:40.057810 4002183 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0115 14:02:40.057885 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0115 14:02:40.128183 4002183 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0115 14:02:40.128257 4002183 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0115 14:02:40.202859 4002183 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0115 14:02:40.464698 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:40.467197 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:40.487538 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:40.540337 4002183 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.757168101s)
	I0115 14:02:40.965977 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:40.966380 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:40.980243 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:41.136767 4002183 addons.go:470] Verifying addon gcp-auth=true in "addons-916083"
	I0115 14:02:41.140244 4002183 out.go:177] * Verifying gcp-auth addon...
	I0115 14:02:41.143848 4002183 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0115 14:02:41.150059 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:02:41.150532 4002183 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0115 14:02:41.150551 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:41.471608 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:41.484048 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:41.485337 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:41.648008 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:41.965688 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:41.966843 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:41.979719 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:42.149317 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:42.466130 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:42.468536 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:42.479093 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:42.648948 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:42.962514 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:42.964819 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:42.979334 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:43.148780 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:43.463025 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:43.463908 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:43.478554 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:43.635497 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:02:43.648189 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:43.964033 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:43.965595 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:43.979992 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:44.147904 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:44.464533 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:44.467463 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:44.480006 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:44.648881 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:44.964848 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:44.967426 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:44.979380 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:45.149020 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:45.463360 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:45.466983 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:45.479138 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:45.635643 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:02:45.648712 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:45.965123 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:45.965642 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:45.979180 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:46.148529 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:46.463125 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:46.464459 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:46.479129 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:46.648730 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:46.963940 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:46.965097 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:46.979009 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:47.148585 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:47.464032 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:47.468388 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:47.483735 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:47.636949 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:02:47.649476 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:47.965060 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:47.966003 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:47.978606 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:48.148371 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:48.462780 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:48.465672 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:48.479160 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:48.647424 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:48.971833 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:48.972007 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:48.979327 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:49.148682 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:49.463782 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:49.464760 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:49.479793 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:49.648975 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:49.963149 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:49.965757 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:49.979036 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:50.135860 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:02:50.147536 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:50.462787 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:50.464556 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:50.478400 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:50.647681 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:50.962629 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:50.963836 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:50.978605 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:51.147653 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:51.463735 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:51.464020 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:51.478921 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:51.647415 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:51.962569 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:51.964914 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:51.978519 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:52.147294 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:52.463596 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:52.465826 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:52.478423 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:52.634786 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:02:52.647942 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:52.964960 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:52.966109 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:52.978872 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:53.147465 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:53.464581 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:53.465678 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:53.478472 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:53.647376 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:53.964344 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:53.964663 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:53.978865 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:54.147589 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:54.464218 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:54.465157 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:54.478359 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:54.634909 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:02:54.647757 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:54.964558 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:54.965997 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:54.978629 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:55.147469 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:55.464749 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:55.465893 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:55.480076 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:55.647462 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:55.962451 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:55.964595 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:55.978611 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:56.147553 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:56.463584 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:56.464975 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:56.478756 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:56.635132 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:02:56.647361 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:56.963538 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:56.965124 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:56.979172 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:57.147497 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:57.463865 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:57.464302 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:57.479427 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:57.648202 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:57.962757 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:57.963645 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:57.978937 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:58.148351 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:58.462753 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:58.464791 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:58.478418 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:58.647941 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:58.962253 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:58.964239 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:58.978704 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:59.134833 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:02:59.147845 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:59.462796 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:59.464898 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:59.477884 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:02:59.648408 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:02:59.962252 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:02:59.964691 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:02:59.978090 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:00.148508 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:00.463088 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:00.464099 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:00.479129 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:00.648201 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:00.963280 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:00.964150 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:00.978610 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:01.147163 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:01.465607 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:01.467438 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:01.479126 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:01.639383 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:03:01.648241 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:01.963893 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:01.964429 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:01.978960 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:02.149449 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:02.466254 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:02.467412 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:02.479519 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:02.648727 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:02.966791 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:02.967846 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:02.979875 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:03.147720 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:03.465652 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:03.466837 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:03.478712 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:03.648141 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:03.966441 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:03.967511 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:03.979682 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:04.136932 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:03:04.148324 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:04.464943 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:04.465866 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:04.478613 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:04.648029 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:04.963669 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:04.964535 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:04.978461 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:05.147838 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:05.464070 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:05.465291 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:05.479509 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:05.648729 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:05.968357 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:05.969305 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:05.980156 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:06.148033 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:06.462622 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:06.464996 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:06.478805 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:06.636037 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:03:06.648327 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:06.965290 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:06.967224 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:06.978793 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:07.148024 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:07.463703 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:07.464712 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:07.480115 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:07.649063 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:07.965754 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:07.966850 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:07.979869 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:08.147499 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:08.463673 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:08.465794 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:08.478121 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:08.647549 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:08.967099 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:08.968152 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:08.981447 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:09.135332 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:03:09.147118 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:09.464459 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:09.465037 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:09.478817 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:09.648244 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:09.964807 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:09.965667 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:09.979066 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:10.147571 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:10.463743 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:10.464582 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:10.481811 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:10.648197 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:10.985997 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:10.989596 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:10.995467 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:11.136784 4002183 pod_ready.go:102] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"False"
	I0115 14:03:11.148803 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:11.466785 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:11.468118 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:11.480715 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:11.648878 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:11.976133 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:11.976922 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:11.994571 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:12.138306 4002183 pod_ready.go:92] pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace has status "Ready":"True"
	I0115 14:03:12.138382 4002183 pod_ready.go:81] duration metric: took 40.009981094s waiting for pod "coredns-5dd5756b68-nbgjt" in "kube-system" namespace to be "Ready" ...
	I0115 14:03:12.138410 4002183 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-916083" in "kube-system" namespace to be "Ready" ...
	I0115 14:03:12.162682 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:12.164739 4002183 pod_ready.go:92] pod "etcd-addons-916083" in "kube-system" namespace has status "Ready":"True"
	I0115 14:03:12.164803 4002183 pod_ready.go:81] duration metric: took 26.371494ms waiting for pod "etcd-addons-916083" in "kube-system" namespace to be "Ready" ...
	I0115 14:03:12.164833 4002183 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-916083" in "kube-system" namespace to be "Ready" ...
	I0115 14:03:12.181087 4002183 pod_ready.go:92] pod "kube-apiserver-addons-916083" in "kube-system" namespace has status "Ready":"True"
	I0115 14:03:12.181162 4002183 pod_ready.go:81] duration metric: took 16.306823ms waiting for pod "kube-apiserver-addons-916083" in "kube-system" namespace to be "Ready" ...
	I0115 14:03:12.181189 4002183 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-916083" in "kube-system" namespace to be "Ready" ...
	I0115 14:03:12.199261 4002183 pod_ready.go:92] pod "kube-controller-manager-addons-916083" in "kube-system" namespace has status "Ready":"True"
	I0115 14:03:12.199329 4002183 pod_ready.go:81] duration metric: took 18.119037ms waiting for pod "kube-controller-manager-addons-916083" in "kube-system" namespace to be "Ready" ...
	I0115 14:03:12.199356 4002183 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fs7hg" in "kube-system" namespace to be "Ready" ...
	I0115 14:03:12.214973 4002183 pod_ready.go:92] pod "kube-proxy-fs7hg" in "kube-system" namespace has status "Ready":"True"
	I0115 14:03:12.215046 4002183 pod_ready.go:81] duration metric: took 15.655232ms waiting for pod "kube-proxy-fs7hg" in "kube-system" namespace to be "Ready" ...
	I0115 14:03:12.215076 4002183 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-916083" in "kube-system" namespace to be "Ready" ...
	I0115 14:03:12.466644 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:12.467867 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:12.478755 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:12.532994 4002183 pod_ready.go:92] pod "kube-scheduler-addons-916083" in "kube-system" namespace has status "Ready":"True"
	I0115 14:03:12.533027 4002183 pod_ready.go:81] duration metric: took 317.928809ms waiting for pod "kube-scheduler-addons-916083" in "kube-system" namespace to be "Ready" ...
	I0115 14:03:12.533038 4002183 pod_ready.go:38] duration metric: took 41.417475935s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
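
The pod_ready.go polling above keys off each pod's Ready condition: a pod counts as "Ready":"True" once that condition reports true. As a hedged illustration (not minikube's actual pod_ready.go; the function names here are invented), the check reduces to reading the PodReady condition via client-go:

	package readiness

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// isPodReady mirrors the check behind `has status "Ready":"True"`:
	// the pod is Ready when its PodReady condition is ConditionTrue.
	func isPodReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// checkPodReady fetches one pod and logs its readiness, roughly as
	// pod_ready.go does on every poll (illustrative helper, not minikube code).
	func checkPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		ready := isPodReady(pod)
		fmt.Printf("pod %q in %q namespace has status \"Ready\":%v\n", name, ns, ready)
		return ready, nil
	}
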
	I0115 14:03:12.533052 4002183 api_server.go:52] waiting for apiserver process to appear ...
	I0115 14:03:12.533117 4002183 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 14:03:12.551491 4002183 api_server.go:72] duration metric: took 41.856008876s to wait for apiserver process to appear ...
	I0115 14:03:12.551518 4002183 api_server.go:88] waiting for apiserver healthz status ...
	I0115 14:03:12.551539 4002183 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0115 14:03:12.561202 4002183 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0115 14:03:12.562577 4002183 api_server.go:141] control plane version: v1.28.4
	I0115 14:03:12.562604 4002183 api_server.go:131] duration metric: took 11.078928ms to wait for apiserver health ...
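
The api_server.go healthz probe above is a plain HTTPS GET that expects a 200 response with the body "ok". A minimal standalone sketch, assuming the same node IP and port as this run, and skipping certificate verification only for brevity (a real client should trust the cluster CA instead):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// NOTE: InsecureSkipVerify is acceptable only for a local smoke test
		// against the apiserver's self-signed certificate.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		const url = "https://192.168.49.2:8443/healthz"
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver returns 200 with body "ok".
		fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	}
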
	I0115 14:03:12.562619 4002183 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 14:03:12.648546 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:12.740629 4002183 system_pods.go:59] 18 kube-system pods found
	I0115 14:03:12.740667 4002183 system_pods.go:61] "coredns-5dd5756b68-nbgjt" [43a41f50-fc86-4450-92f0-647531dfb3a6] Running
	I0115 14:03:12.740678 4002183 system_pods.go:61] "csi-hostpath-attacher-0" [d28d28f1-dd34-4a36-b1c0-9a2f48d68a02] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0115 14:03:12.740687 4002183 system_pods.go:61] "csi-hostpath-resizer-0" [6450d9dd-310b-4f7e-8c36-9376ababd82d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0115 14:03:12.740698 4002183 system_pods.go:61] "csi-hostpathplugin-j5mdh" [a2652475-1a19-4825-b0df-29a7c90b5c6d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0115 14:03:12.740707 4002183 system_pods.go:61] "etcd-addons-916083" [ecd374c3-b2bc-43f9-9ffb-f1e90f3e56a5] Running
	I0115 14:03:12.740713 4002183 system_pods.go:61] "kindnet-6r7md" [04c8cf3d-7c92-4d8a-a7e2-b7c376d3eb7b] Running
	I0115 14:03:12.740724 4002183 system_pods.go:61] "kube-apiserver-addons-916083" [5a630a36-4424-4b9e-9583-9bfe87adb3ff] Running
	I0115 14:03:12.740729 4002183 system_pods.go:61] "kube-controller-manager-addons-916083" [64ae8a0e-7851-490c-899a-d987c1708fa0] Running
	I0115 14:03:12.740737 4002183 system_pods.go:61] "kube-ingress-dns-minikube" [2106a856-cb65-4ce7-84ae-6bc223f27497] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0115 14:03:12.740747 4002183 system_pods.go:61] "kube-proxy-fs7hg" [e6f3d1de-7ff6-4630-b33c-5511a78fe470] Running
	I0115 14:03:12.740752 4002183 system_pods.go:61] "kube-scheduler-addons-916083" [8c5b2462-756f-45c2-bdb4-303bf46fa948] Running
	I0115 14:03:12.740759 4002183 system_pods.go:61] "metrics-server-7c66d45ddc-2qp4d" [d0a4b682-7faf-459b-a7d0-8873c8b2db17] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0115 14:03:12.740776 4002183 system_pods.go:61] "nvidia-device-plugin-daemonset-dj78p" [10888201-3bd5-457a-aa04-7bc6a2d2dc6a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0115 14:03:12.740784 4002183 system_pods.go:61] "registry-htcrm" [51ffa260-a633-46c3-8d2c-1a9690503666] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0115 14:03:12.740790 4002183 system_pods.go:61] "registry-proxy-74zd5" [f108cc01-7802-4b5f-8935-c829e0ac2f02] Running
	I0115 14:03:12.740798 4002183 system_pods.go:61] "snapshot-controller-58dbcc7b99-bzhl5" [dd38a8e6-1095-44b6-a257-7322dd8369e7] Running
	I0115 14:03:12.740803 4002183 system_pods.go:61] "snapshot-controller-58dbcc7b99-szsw9" [c542b9d8-bd4a-48a2-8471-8e1b6a2b2cf8] Running
	I0115 14:03:12.740811 4002183 system_pods.go:61] "storage-provisioner" [175a8490-3dc2-47a2-a5bf-54717b94f58b] Running
	I0115 14:03:12.740822 4002183 system_pods.go:74] duration metric: took 178.196575ms to wait for pod list to return data ...
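
The system_pods.go snapshot above lists every kube-system pod with its phase and, for pods that are not Ready, the containers still unready. A rough client-go sketch of building such a snapshot (illustrative names and output format, not minikube's code):

	package systempods

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// listSystemPods prints each kube-system pod's phase plus any containers
	// whose Ready flag is still false, approximating the log lines above.
	func listSystemPods(ctx context.Context, c kubernetes.Interface) error {
		pods, err := c.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			var unready []string
			for _, cs := range p.Status.ContainerStatuses {
				if !cs.Ready {
					unready = append(unready, cs.Name)
				}
			}
			if len(unready) == 0 {
				fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
			} else {
				fmt.Printf("%q [%s] %s / containers with unready status: %v\n",
					p.Name, p.UID, p.Status.Phase, unready)
			}
		}
		return nil
	}
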
	I0115 14:03:12.740831 4002183 default_sa.go:34] waiting for default service account to be created ...
	I0115 14:03:12.931643 4002183 default_sa.go:45] found service account: "default"
	I0115 14:03:12.931672 4002183 default_sa.go:55] duration metric: took 190.82905ms for default service account to be created ...
	I0115 14:03:12.931682 4002183 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 14:03:12.970241 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:12.971455 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:12.990736 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:13.140990 4002183 system_pods.go:86] 18 kube-system pods found
	I0115 14:03:13.141070 4002183 system_pods.go:89] "coredns-5dd5756b68-nbgjt" [43a41f50-fc86-4450-92f0-647531dfb3a6] Running
	I0115 14:03:13.141088 4002183 system_pods.go:89] "csi-hostpath-attacher-0" [d28d28f1-dd34-4a36-b1c0-9a2f48d68a02] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0115 14:03:13.141097 4002183 system_pods.go:89] "csi-hostpath-resizer-0" [6450d9dd-310b-4f7e-8c36-9376ababd82d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0115 14:03:13.141106 4002183 system_pods.go:89] "csi-hostpathplugin-j5mdh" [a2652475-1a19-4825-b0df-29a7c90b5c6d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0115 14:03:13.141115 4002183 system_pods.go:89] "etcd-addons-916083" [ecd374c3-b2bc-43f9-9ffb-f1e90f3e56a5] Running
	I0115 14:03:13.141121 4002183 system_pods.go:89] "kindnet-6r7md" [04c8cf3d-7c92-4d8a-a7e2-b7c376d3eb7b] Running
	I0115 14:03:13.141129 4002183 system_pods.go:89] "kube-apiserver-addons-916083" [5a630a36-4424-4b9e-9583-9bfe87adb3ff] Running
	I0115 14:03:13.141136 4002183 system_pods.go:89] "kube-controller-manager-addons-916083" [64ae8a0e-7851-490c-899a-d987c1708fa0] Running
	I0115 14:03:13.141144 4002183 system_pods.go:89] "kube-ingress-dns-minikube" [2106a856-cb65-4ce7-84ae-6bc223f27497] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0115 14:03:13.141151 4002183 system_pods.go:89] "kube-proxy-fs7hg" [e6f3d1de-7ff6-4630-b33c-5511a78fe470] Running
	I0115 14:03:13.141159 4002183 system_pods.go:89] "kube-scheduler-addons-916083" [8c5b2462-756f-45c2-bdb4-303bf46fa948] Running
	I0115 14:03:13.141165 4002183 system_pods.go:89] "metrics-server-7c66d45ddc-2qp4d" [d0a4b682-7faf-459b-a7d0-8873c8b2db17] Running
	I0115 14:03:13.141176 4002183 system_pods.go:89] "nvidia-device-plugin-daemonset-dj78p" [10888201-3bd5-457a-aa04-7bc6a2d2dc6a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0115 14:03:13.141182 4002183 system_pods.go:89] "registry-htcrm" [51ffa260-a633-46c3-8d2c-1a9690503666] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0115 14:03:13.141194 4002183 system_pods.go:89] "registry-proxy-74zd5" [f108cc01-7802-4b5f-8935-c829e0ac2f02] Running
	I0115 14:03:13.141200 4002183 system_pods.go:89] "snapshot-controller-58dbcc7b99-bzhl5" [dd38a8e6-1095-44b6-a257-7322dd8369e7] Running
	I0115 14:03:13.141208 4002183 system_pods.go:89] "snapshot-controller-58dbcc7b99-szsw9" [c542b9d8-bd4a-48a2-8471-8e1b6a2b2cf8] Running
	I0115 14:03:13.141212 4002183 system_pods.go:89] "storage-provisioner" [175a8490-3dc2-47a2-a5bf-54717b94f58b] Running
	I0115 14:03:13.141219 4002183 system_pods.go:126] duration metric: took 209.53212ms to wait for k8s-apps to be running ...
	I0115 14:03:13.141336 4002183 system_svc.go:44] waiting for kubelet service to be running ...
	I0115 14:03:13.141422 4002183 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 14:03:13.148863 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:13.184887 4002183 system_svc.go:56] duration metric: took 43.542727ms WaitForService to wait for kubelet.
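
The system_svc.go check above runs `sudo systemctl is-active --quiet service kubelet` through the SSH runner and relies solely on the exit status. A local-exec sketch of the same idea (illustrative; in minikube the command runs inside the node over SSH, and the `service kubelet` arguments mirror the logged command verbatim):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// --quiet suppresses output; a zero exit status means the unit is active.
		cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet service is not running:", err)
			return
		}
		fmt.Println("kubelet service is running")
	}
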
	I0115 14:03:13.184916 4002183 kubeadm.go:581] duration metric: took 42.489439437s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 14:03:13.184941 4002183 node_conditions.go:102] verifying NodePressure condition ...
	I0115 14:03:13.332174 4002183 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0115 14:03:13.332208 4002183 node_conditions.go:123] node cpu capacity is 2
	I0115 14:03:13.332222 4002183 node_conditions.go:105] duration metric: took 147.27564ms to run NodePressure ...
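
The node_conditions.go figures above (203034800Ki of ephemeral storage, 2 CPUs) come straight from the node's reported capacity. A short client-go sketch of reading them (the node name argument is an assumption here):

	package nodecap

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// printNodeCapacity reads the capacity quantities off the node status;
	// quantities print in their reported units, e.g. "203034800Ki" and "2".
	func printNodeCapacity(ctx context.Context, c kubernetes.Interface, name string) error {
		node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		capacity := node.Status.Capacity
		fmt.Printf("node storage ephemeral capacity is %s\n", capacity.StorageEphemeral())
		fmt.Printf("node cpu capacity is %s\n", capacity.Cpu())
		return nil
	}
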
	I0115 14:03:13.332233 4002183 start.go:228] waiting for startup goroutines ...
	I0115 14:03:13.465734 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:13.466285 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:13.480178 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:13.648281 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:13.965029 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:13.966267 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:13.979961 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:14.148460 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:14.463008 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:14.465501 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:14.479070 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:14.647680 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:14.971737 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:14.972514 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:14.989073 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:15.148022 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:15.462717 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:15.465931 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0115 14:03:15.480166 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:15.648072 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:15.964704 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:15.967957 4002183 kapi.go:107] duration metric: took 37.508611211s to wait for kubernetes.io/minikube-addons=registry ...
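
The kapi.go:96 lines that dominate this log are per-label polling loops: each one re-lists pods matching an addon selector (registry, ingress-nginx, csi-hostpath-driver, gcp-auth) until they leave Pending, after which kapi.go:107 records the total duration, as for the registry label just above. A hedged sketch of such a loop using apimachinery's wait helpers (the interval and function name are invented; this is not minikube's kapi.go):

	package kapiwait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForLabeledPods polls until every pod matching the selector is Running,
	// logging the current state on each miss, in the spirit of the lines above.
	func waitForLabeledPods(c kubernetes.Interface, selector string, timeout time.Duration) error {
		start := time.Now()
		err := wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pods, err := c.CoreV1().Pods(metav1.NamespaceAll).List(
				context.Background(), metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat API errors as transient: keep polling
			}
			if len(pods.Items) == 0 {
				fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
		if err == nil {
			fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
		}
		return err
	}
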
	I0115 14:03:15.981103 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:16.148184 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:16.464011 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:16.478780 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:16.647405 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:16.971041 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:16.990931 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:17.147750 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:17.463766 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:17.479340 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:17.648152 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:17.964122 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:17.979620 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:18.148521 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:18.463611 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:18.479302 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:18.648430 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:18.970001 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:18.981634 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:19.148409 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:19.462840 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:19.479156 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:19.648031 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:19.963219 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:19.978654 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:20.148751 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:20.466790 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:20.479808 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:20.647602 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:20.962625 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:20.979978 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:21.148350 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:21.463225 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:21.478872 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:21.647688 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:21.962717 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:21.979583 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:22.148562 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:22.462911 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:22.479398 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:22.647817 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:22.963116 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:22.980014 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:23.147755 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:23.463329 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:23.479815 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:23.647377 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:23.962528 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:23.981757 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:24.147873 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:24.463742 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:24.479065 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:24.648127 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:24.963178 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:24.978350 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:25.148528 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:25.462757 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:25.479585 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:25.648158 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:25.963231 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:25.981623 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:26.148368 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:26.466221 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:26.481212 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:26.648225 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:26.962357 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:26.979438 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:27.149005 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:27.462325 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:27.478639 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:27.648220 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:27.962482 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:27.978546 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:28.148719 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:28.464009 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:28.479929 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:28.648046 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:28.963321 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:28.978523 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:29.148347 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:29.462898 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:29.480304 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:29.647757 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:29.963547 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:29.978969 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:30.147905 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:30.463502 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:30.479936 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:30.647965 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:30.963013 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:30.978116 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:31.147176 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:31.463576 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:31.479137 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:31.648043 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:31.963131 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:31.979071 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:32.147816 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:32.464083 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:32.478760 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:32.650823 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:32.963149 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:32.985880 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:33.147636 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:33.462820 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:33.478470 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:33.648550 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:33.963350 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:33.978781 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:34.148358 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:34.463204 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:34.478720 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:34.648522 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:34.963277 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:34.979651 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:35.147396 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:35.463292 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:35.478726 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:35.647372 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:35.963036 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:35.978557 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:36.148767 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:36.463438 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:36.479077 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:36.647689 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:36.963864 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:36.980093 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0115 14:03:37.147797 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:37.463522 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:37.482284 4002183 kapi.go:107] duration metric: took 57.51061781s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0115 14:03:37.648282 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:37.963271 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:38.148139 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:38.463388 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:38.648417 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:38.963148 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:39.147792 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:39.463103 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:39.648133 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:39.962894 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:40.147576 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:40.462680 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:40.647677 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:40.963082 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:41.147757 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:41.463063 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:41.647633 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:41.962681 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:42.148960 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:42.464085 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:42.647794 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:42.963208 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:43.148814 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:43.463642 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:43.648147 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:43.963152 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:44.149049 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:44.462599 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:44.653408 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:44.964747 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:45.148889 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:45.463452 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:45.647768 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:45.963749 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:46.148914 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:46.465176 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:46.647979 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:46.963441 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:47.149365 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:47.463342 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:47.648482 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:47.964336 4002183 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0115 14:03:48.149279 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:48.464016 4002183 kapi.go:107] duration metric: took 1m10.013808165s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0115 14:03:48.647832 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:49.148654 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:49.647487 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:50.147585 4002183 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0115 14:03:50.647569 4002183 kapi.go:107] duration metric: took 1m9.503716965s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0115 14:03:50.649776 4002183 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-916083 cluster.
	I0115 14:03:50.651701 4002183 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0115 14:03:50.653504 4002183 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0115 14:03:50.655691 4002183 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, nvidia-device-plugin, cloud-spanner, default-storageclass, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0115 14:03:50.657725 4002183 addons.go:505] enable addons completed in 1m20.476337186s: enabled=[ingress-dns storage-provisioner nvidia-device-plugin cloud-spanner default-storageclass storage-provisioner-rancher inspektor-gadget metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0115 14:03:50.657771 4002183 start.go:233] waiting for cluster config update ...
	I0115 14:03:50.657791 4002183 start.go:242] writing updated cluster config ...
	I0115 14:03:50.658094 4002183 ssh_runner.go:195] Run: rm -f paused
	I0115 14:03:51.002489 4002183 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0115 14:03:51.004648 4002183 out.go:177] * Done! kubectl is now configured to use "addons-916083" cluster and "default" namespace by default
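Note on the kapi.go wait loop above: minikube polls roughly every 500ms for pods carrying each addon's selector label until they leave Pending, then records the duration metric. A roughly equivalent manual check would be the following kubectl invocation (a sketch, not part of the test run itself):

    kubectl --context addons-916083 wait pod --all-namespaces \
      --selector=kubernetes.io/minikube-addons=gcp-auth \
      --for=condition=Ready --timeout=2m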
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	24cee9d0b20fc       23466caa55cb7       3 seconds ago        Exited              busybox                   0                   e3cf96895c850       test-local-path
	66e68d5d040ff       fc9db2894f4e4       8 seconds ago        Exited              helper-pod                0                   5ce403e0a01b9       helper-pod-create-pvc-ebcda5df-1519-4ca3-8350-f3873dc95050
	e72a9f7597941       dd1b12fcb6097       20 seconds ago       Exited              hello-world-app           2                   47ff6f58ce48a       hello-world-app-5d77478584-wqrqg
	044423708daeb       74077e780ec71       45 seconds ago       Running             nginx                     0                   4f91ae5950411       nginx
	cd8ec42b386f2       2a5f29343eb03       About a minute ago   Running             gcp-auth                  0                   5054f5f2561d3       gcp-auth-d4c87556c-kr5wf
	23a7123973ef9       af594c6a879f2       About a minute ago   Exited              patch                     2                   b633437005e96       ingress-nginx-admission-patch-qs9bf
	581091146b79d       20e3f2db01e81       About a minute ago   Running             yakd                      0                   f21746e6bb044       yakd-dashboard-9947fc6bf-hqlhr
	163ffe68b1e25       af594c6a879f2       About a minute ago   Exited              create                    0                   cf50bb977cfc3       ingress-nginx-admission-create-m2dh4
	ffc6fd4d1596d       a89778274bf53       About a minute ago   Running             cloud-spanner-emulator    0                   7659c9ad58d08       cloud-spanner-emulator-64c8c85f65-c8qfb
	0509b1d6488f6       97e04611ad434       2 minutes ago        Running             coredns                   0                   0d2dd25a562ef       coredns-5dd5756b68-nbgjt
	8ca38fbc3ac83       7ce2150c8929b       2 minutes ago        Running             local-path-provisioner    0                   2fc2f53014086       local-path-provisioner-78b46b4d5c-ddfls
	8702bfee57bb9       ba04bb24b9575       2 minutes ago        Running             storage-provisioner       0                   50ab4c2d4d0de       storage-provisioner
	631d60ffe727c       3ca3ca488cf13       2 minutes ago        Running             kube-proxy                0                   72a1079e82ac7       kube-proxy-fs7hg
	cc48c18dbdb99       04b4eaa3d3db8       2 minutes ago        Running             kindnet-cni               0                   e334ffd6cac06       kindnet-6r7md
	ab42205eeee75       05c284c929889       3 minutes ago        Running             kube-scheduler            0                   0ca40721db9e6       kube-scheduler-addons-916083
	03bb27f5cf55e       9961cbceaf234       3 minutes ago        Running             kube-controller-manager   0                   aa123d8449420       kube-controller-manager-addons-916083
	4b4fb5cb9a74f       04b4c447bb9d4       3 minutes ago        Running             kube-apiserver            0                   33b8781915870       kube-apiserver-addons-916083
	1cc454d90602a       9cdd6470f48c8       3 minutes ago        Running             etcd                      0                   29d33a5b6d8da       etcd-addons-916083
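The table above is CRI-level container state on the node; assuming the crictl binary shipped in the minikube node image (present by default), it can be reproduced with:

    out/minikube-linux-arm64 -p addons-916083 ssh "sudo crictl ps -a"

The -a flag includes Exited containers, which is why the finished busybox from test-local-path and the hello-world-app container (already on attempt 2) appear alongside the long-running control-plane containers.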
	
	
	==> containerd <==
	Jan 15 14:05:08 addons-916083 containerd[741]: time="2024-01-15T14:05:08.146845676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-local-path,Uid:1d543c57-f8ba-4c5e-ab39-b29e99623a36,Namespace:default,Attempt:0,} returns sandbox id \"e3cf96895c850cc6ff6d09442fed7d9b673026852ee91bbb927c05ffd666d5ba\""
	Jan 15 14:05:08 addons-916083 containerd[741]: time="2024-01-15T14:05:08.149192606Z" level=info msg="PullImage \"busybox:stable\""
	Jan 15 14:05:08 addons-916083 containerd[741]: time="2024-01-15T14:05:08.151017146Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jan 15 14:05:08 addons-916083 containerd[741]: time="2024-01-15T14:05:08.328276084Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jan 15 14:05:08 addons-916083 containerd[741]: time="2024-01-15T14:05:08.914386911Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/library/busybox:stable,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Jan 15 14:05:08 addons-916083 containerd[741]: time="2024-01-15T14:05:08.918435759Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:23466caa55cb731e1404a17c8a35ac202c16cb952ff210d4ed50a7518b9e9559,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Jan 15 14:05:08 addons-916083 containerd[741]: time="2024-01-15T14:05:08.923627714Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/busybox:stable,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Jan 15 14:05:08 addons-916083 containerd[741]: time="2024-01-15T14:05:08.927010201Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/library/busybox@sha256:ba76950ac9eaa407512c9d859cea48114eeff8a6f12ebaa5d32ce79d4a017dd8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Jan 15 14:05:08 addons-916083 containerd[741]: time="2024-01-15T14:05:08.933009576Z" level=info msg="PullImage \"busybox:stable\" returns image reference \"sha256:23466caa55cb731e1404a17c8a35ac202c16cb952ff210d4ed50a7518b9e9559\""
	Jan 15 14:05:08 addons-916083 containerd[741]: time="2024-01-15T14:05:08.939696349Z" level=info msg="CreateContainer within sandbox \"e3cf96895c850cc6ff6d09442fed7d9b673026852ee91bbb927c05ffd666d5ba\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Jan 15 14:05:08 addons-916083 containerd[741]: time="2024-01-15T14:05:08.965328092Z" level=info msg="CreateContainer within sandbox \"e3cf96895c850cc6ff6d09442fed7d9b673026852ee91bbb927c05ffd666d5ba\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"24cee9d0b20fc2045b8438e0d9896199c366ae62f6a237830322b0f01bf8980a\""
	Jan 15 14:05:08 addons-916083 containerd[741]: time="2024-01-15T14:05:08.966425063Z" level=info msg="StartContainer for \"24cee9d0b20fc2045b8438e0d9896199c366ae62f6a237830322b0f01bf8980a\""
	Jan 15 14:05:09 addons-916083 containerd[741]: time="2024-01-15T14:05:09.029947992Z" level=info msg="StartContainer for \"24cee9d0b20fc2045b8438e0d9896199c366ae62f6a237830322b0f01bf8980a\" returns successfully"
	Jan 15 14:05:09 addons-916083 containerd[741]: time="2024-01-15T14:05:09.077380889Z" level=info msg="shim disconnected" id=24cee9d0b20fc2045b8438e0d9896199c366ae62f6a237830322b0f01bf8980a
	Jan 15 14:05:09 addons-916083 containerd[741]: time="2024-01-15T14:05:09.077441843Z" level=warning msg="cleaning up after shim disconnected" id=24cee9d0b20fc2045b8438e0d9896199c366ae62f6a237830322b0f01bf8980a namespace=k8s.io
	Jan 15 14:05:09 addons-916083 containerd[741]: time="2024-01-15T14:05:09.077454938Z" level=info msg="cleaning up dead shim"
	Jan 15 14:05:09 addons-916083 containerd[741]: time="2024-01-15T14:05:09.088045242Z" level=warning msg="cleanup warnings time=\"2024-01-15T14:05:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10815 runtime=io.containerd.runc.v2\n"
	Jan 15 14:05:10 addons-916083 containerd[741]: time="2024-01-15T14:05:10.928572026Z" level=info msg="StopPodSandbox for \"e3cf96895c850cc6ff6d09442fed7d9b673026852ee91bbb927c05ffd666d5ba\""
	Jan 15 14:05:10 addons-916083 containerd[741]: time="2024-01-15T14:05:10.928649882Z" level=info msg="Container to stop \"24cee9d0b20fc2045b8438e0d9896199c366ae62f6a237830322b0f01bf8980a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jan 15 14:05:10 addons-916083 containerd[741]: time="2024-01-15T14:05:10.974591086Z" level=info msg="shim disconnected" id=e3cf96895c850cc6ff6d09442fed7d9b673026852ee91bbb927c05ffd666d5ba
	Jan 15 14:05:10 addons-916083 containerd[741]: time="2024-01-15T14:05:10.974803577Z" level=warning msg="cleaning up after shim disconnected" id=e3cf96895c850cc6ff6d09442fed7d9b673026852ee91bbb927c05ffd666d5ba namespace=k8s.io
	Jan 15 14:05:10 addons-916083 containerd[741]: time="2024-01-15T14:05:10.974896645Z" level=info msg="cleaning up dead shim"
	Jan 15 14:05:10 addons-916083 containerd[741]: time="2024-01-15T14:05:10.990598882Z" level=warning msg="cleanup warnings time=\"2024-01-15T14:05:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=10865 runtime=io.containerd.runc.v2\n"
	Jan 15 14:05:11 addons-916083 containerd[741]: time="2024-01-15T14:05:11.028047947Z" level=info msg="TearDown network for sandbox \"e3cf96895c850cc6ff6d09442fed7d9b673026852ee91bbb927c05ffd666d5ba\" successfully"
	Jan 15 14:05:11 addons-916083 containerd[741]: time="2024-01-15T14:05:11.028230514Z" level=info msg="StopPodSandbox for \"e3cf96895c850cc6ff6d09442fed7d9b673026852ee91bbb927c05ffd666d5ba\" returns successfully"
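The repeated "failed to decode hosts.toml" errors mean containerd found a malformed registry hosts file under its certs.d directory and fell back to its default resolver; the busybox pull still succeeded. For reference, a minimal well-formed hosts.toml follows the upstream format below (a sketch of the format only, not the file from this node, which is not shown in the log):

    # /etc/containerd/certs.d/docker.io/hosts.toml
    server = "https://registry-1.docker.io"

    [host."https://registry-1.docker.io"]
      capabilities = ["pull", "resolve"]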
	
	
	==> coredns [0509b1d6488f6866ef9630531c42875aed9eae5871a443cca13f897c6ca3cc30] <==
	[INFO] 10.244.0.19:53845 - 8820 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000055251s
	[INFO] 10.244.0.19:53845 - 14787 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000055095s
	[INFO] 10.244.0.19:53845 - 59218 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000053652s
	[INFO] 10.244.0.19:53845 - 59513 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00005549s
	[INFO] 10.244.0.19:53845 - 8722 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001013304s
	[INFO] 10.244.0.19:53845 - 1504 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00095057s
	[INFO] 10.244.0.19:53845 - 21962 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000083403s
	[INFO] 10.244.0.19:42424 - 7502 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000105654s
	[INFO] 10.244.0.19:57422 - 56180 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000232297s
	[INFO] 10.244.0.19:42424 - 26269 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000074386s
	[INFO] 10.244.0.19:57422 - 2537 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00007825s
	[INFO] 10.244.0.19:42424 - 14864 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000083575s
	[INFO] 10.244.0.19:57422 - 4580 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000097286s
	[INFO] 10.244.0.19:57422 - 6859 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000060872s
	[INFO] 10.244.0.19:42424 - 9857 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000089294s
	[INFO] 10.244.0.19:57422 - 33462 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000046456s
	[INFO] 10.244.0.19:42424 - 59963 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000069898s
	[INFO] 10.244.0.19:57422 - 27444 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000097179s
	[INFO] 10.244.0.19:42424 - 51408 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000066697s
	[INFO] 10.244.0.19:57422 - 42659 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.0012632s
	[INFO] 10.244.0.19:42424 - 54964 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001593505s
	[INFO] 10.244.0.19:57422 - 62979 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000877743s
	[INFO] 10.244.0.19:42424 - 31305 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001187642s
	[INFO] 10.244.0.19:57422 - 44971 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00006696s
	[INFO] 10.244.0.19:42424 - 18999 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000034764s
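The NXDOMAIN burst above is ordinary search-path expansion, not a resolution failure: "hello-world-app.default.svc.cluster.local" contains four dots, below the default ndots:5 threshold, so the resolver appends every search domain first and only then tries the name as-is, which returns NOERROR. The pod resolver config implied by these queries would look like the sketch below (the search list is visible in the query names themselves; the nameserver IP is the conventional kube-dns ClusterIP and is not confirmed by this log):

    search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
    nameserver 10.96.0.10
    options ndots:5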
	
	
	==> describe nodes <==
	Name:               addons-916083
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-916083
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=71cf7d00913f789829bf5813c1d11b9a83eda53e
	                    minikube.k8s.io/name=addons-916083
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_15T14_02_17_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-916083
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 14:02:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-916083
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 14:05:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 14:04:49 +0000   Mon, 15 Jan 2024 14:02:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 14:04:49 +0000   Mon, 15 Jan 2024 14:02:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 14:04:49 +0000   Mon, 15 Jan 2024 14:02:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 14:04:49 +0000   Mon, 15 Jan 2024 14:02:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-916083
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 004d54a504a441c4bf6d99550b0c9799
	  System UUID:                eb64688b-abbc-4ff4-af75-2a89f845e9c7
	  Boot ID:                    489f1f75-cead-4e0d-97ee-b5bdbf9f668e
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.26
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-64c8c85f65-c8qfb                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  default                     hello-world-app-5d77478584-wqrqg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  gcp-auth                    gcp-auth-d4c87556c-kr5wf                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 coredns-5dd5756b68-nbgjt                                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m44s
	  kube-system                 etcd-addons-916083                                            100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m57s
	  kube-system                 kindnet-6r7md                                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m44s
	  kube-system                 kube-apiserver-addons-916083                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m58s
	  kube-system                 kube-controller-manager-addons-916083                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 kube-proxy-fs7hg                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 kube-scheduler-addons-916083                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  local-path-storage          helper-pod-delete-pvc-ebcda5df-1519-4ca3-8350-f3873dc95050    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  local-path-storage          local-path-provisioner-78b46b4d5c-ddfls                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-hqlhr                                0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m41s                kube-proxy       
	  Normal  Starting                 3m4s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m4s (x8 over 3m4s)  kubelet          Node addons-916083 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m4s (x8 over 3m4s)  kubelet          Node addons-916083 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m4s (x7 over 3m4s)  kubelet          Node addons-916083 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m57s                kubelet          Node addons-916083 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m57s                kubelet          Node addons-916083 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m57s                kubelet          Node addons-916083 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m57s                kubelet          Node addons-916083 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m57s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m47s                kubelet          Node addons-916083 status is now: NodeReady
	  Normal  RegisteredNode           2m44s                node-controller  Node addons-916083 event: Registered Node addons-916083 in Controller
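(Note: the percentage fields in the two tables above were garbled by printf artifacts in the captured output and have been restored.) A consistency check on the "Allocated resources" figures: the kube-system requests in the pod table sum exactly to the node totals, and the percentages follow from the 2-CPU / ~8Gi allocatable figures.

    cpu:    100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver)
              + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m
            850m / 2000m allocatable = 42.5%, reported as 42%
    memory: 70Mi + 100Mi + 50Mi + 128Mi (yakd) = 348Mi of 8022496Ki ≈ 4%

The test workloads themselves (nginx, hello-world-app, the helper pods) request nothing, so all scheduling pressure on this node comes from the control plane and add-on pods.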
	
	
	==> dmesg <==
	[  +0.000805] FS-Cache: N-cookie c=000000c0 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.000939] FS-Cache: N-cookie d=000000006e17dfe5{9p.inode} n=000000006c2f7aa3
	[  +0.001155] FS-Cache: N-key=[8] '51e2c90000000000'
	[  +0.002800] FS-Cache: Duplicate cookie detected
	[  +0.000758] FS-Cache: O-cookie c=000000ba [p=000000b7 fl=226 nc=0 na=1]
	[  +0.001068] FS-Cache: O-cookie d=000000006e17dfe5{9p.inode} n=00000000acbde6cc
	[  +0.001215] FS-Cache: O-key=[8] '51e2c90000000000'
	[  +0.000760] FS-Cache: N-cookie c=000000c1 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.001003] FS-Cache: N-cookie d=000000006e17dfe5{9p.inode} n=00000000f4035d4d
	[  +0.001087] FS-Cache: N-key=[8] '51e2c90000000000'
	[  +2.762848] FS-Cache: Duplicate cookie detected
	[  +0.000831] FS-Cache: O-cookie c=000000b8 [p=000000b7 fl=226 nc=0 na=1]
	[  +0.001117] FS-Cache: O-cookie d=000000006e17dfe5{9p.inode} n=000000002f94bec5
	[  +0.001162] FS-Cache: O-key=[8] '50e2c90000000000'
	[  +0.000725] FS-Cache: N-cookie c=000000c3 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.000948] FS-Cache: N-cookie d=000000006e17dfe5{9p.inode} n=0000000075bebf78
	[  +0.001135] FS-Cache: N-key=[8] '50e2c90000000000'
	[  +0.389294] FS-Cache: Duplicate cookie detected
	[  +0.000778] FS-Cache: O-cookie c=000000bd [p=000000b7 fl=226 nc=0 na=1]
	[  +0.000969] FS-Cache: O-cookie d=000000006e17dfe5{9p.inode} n=00000000a105c0ad
	[  +0.001207] FS-Cache: O-key=[8] '56e2c90000000000'
	[  +0.000807] FS-Cache: N-cookie c=000000c4 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.000991] FS-Cache: N-cookie d=000000006e17dfe5{9p.inode} n=000000006c2f7aa3
	[  +0.001031] FS-Cache: N-key=[8] '56e2c90000000000'
	[Jan15 13:23] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	
	
	==> etcd [1cc454d90602ac16e878f37a2d7ebae4f134e48bae58cae67ef416f119481c87] <==
	{"level":"info","ts":"2024-01-15T14:02:09.835495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-01-15T14:02:09.835576Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-01-15T14:02:09.8371Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-15T14:02:09.837268Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-01-15T14:02:09.837288Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-01-15T14:02:09.837826Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-15T14:02:09.837852Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-15T14:02:10.823293Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-15T14:02:10.823508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-15T14:02:10.823619Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-01-15T14:02:10.823735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-01-15T14:02:10.823817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-01-15T14:02:10.823893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-01-15T14:02:10.82397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-01-15T14:02:10.82743Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-916083 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-15T14:02:10.827615Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-15T14:02:10.8288Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-15T14:02:10.829087Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-15T14:02:10.82944Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-15T14:02:10.863327Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-15T14:02:10.86354Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-15T14:02:10.864822Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-01-15T14:02:10.878266Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-15T14:02:10.883305Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-15T14:02:10.883511Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
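Note on the etcd log above: this is the normal cold start of a single-member cluster. With aec36adc501070cc as the only voter, the pre-vote and vote are self-answered and the member elects itself leader at term 2. Membership can be confirmed from inside the static pod, which uses the cert paths printed above (a sketch; etcdctl ships in the etcd image):

    kubectl --context addons-916083 -n kube-system exec etcd-addons-916083 -- \
      etcdctl --endpoints=https://127.0.0.1:2379 \
        --cacert=/var/lib/minikube/certs/etcd/ca.crt \
        --cert=/var/lib/minikube/certs/etcd/server.crt \
        --key=/var/lib/minikube/certs/etcd/server.key \
        member list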
	
	==> gcp-auth [cd8ec42b386f28ca1f8c83eafc5bb75daf9c684154eb941f88b132116d20e226] <==
	2024/01/15 14:03:49 GCP Auth Webhook started!
	2024/01/15 14:04:02 Ready to marshal response ...
	2024/01/15 14:04:02 Ready to write response ...
	2024/01/15 14:04:12 Ready to marshal response ...
	2024/01/15 14:04:12 Ready to write response ...
	2024/01/15 14:04:25 Ready to marshal response ...
	2024/01/15 14:04:25 Ready to write response ...
	2024/01/15 14:04:34 Ready to marshal response ...
	2024/01/15 14:04:34 Ready to write response ...
	2024/01/15 14:04:36 Ready to marshal response ...
	2024/01/15 14:04:36 Ready to write response ...
	2024/01/15 14:05:02 Ready to marshal response ...
	2024/01/15 14:05:02 Ready to write response ...
	2024/01/15 14:05:02 Ready to marshal response ...
	2024/01/15 14:05:02 Ready to write response ...
	2024/01/15 14:05:12 Ready to marshal response ...
	2024/01/15 14:05:12 Ready to write response ...
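Each "Ready to marshal/write response" pair above is the mutating webhook handling one pod admission and injecting the credential mount. Per the enable-time note earlier in this log, a pod opts out via the gcp-auth-skip-secret label; a minimal sketch (pod name, image, and the "true" value are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-creds-demo
      labels:
        gcp-auth-skip-secret: "true"
    spec:
      containers:
      - name: main
        image: busybox:stable
        command: ["sleep", "3600"]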
	
	
	==> kernel <==
	 14:05:13 up 18:47,  0 users,  load average: 2.29, 2.14, 2.55
	Linux addons-916083 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [cc48c18dbdb992baa03f2db0baf011ff6f64981f40d1f120d8d450e3513ae2d5] <==
	I0115 14:03:11.479914       1 main.go:227] handling current node
	I0115 14:03:21.491145       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:03:21.491171       1 main.go:227] handling current node
	I0115 14:03:31.503599       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:03:31.503628       1 main.go:227] handling current node
	I0115 14:03:41.507886       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:03:41.507914       1 main.go:227] handling current node
	I0115 14:03:51.519401       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:03:51.519430       1 main.go:227] handling current node
	I0115 14:04:01.530196       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:04:01.530226       1 main.go:227] handling current node
	I0115 14:04:11.539468       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:04:11.539524       1 main.go:227] handling current node
	I0115 14:04:21.544000       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:04:21.544033       1 main.go:227] handling current node
	I0115 14:04:31.555402       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:04:31.555610       1 main.go:227] handling current node
	I0115 14:04:41.560425       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:04:41.560453       1 main.go:227] handling current node
	I0115 14:04:51.566620       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:04:51.566648       1 main.go:227] handling current node
	I0115 14:05:01.571759       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:05:01.571792       1 main.go:227] handling current node
	I0115 14:05:11.588353       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:05:11.588379       1 main.go:227] handling current node
	
	
	==> kube-apiserver [4b4fb5cb9a74fdfee7be722e3f253218e863ea1d5dc44b9177095caed4a158e2] <==
	E0115 14:04:23.632114       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","catch-all","exempt","system","node-high","leader-election","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	I0115 14:04:24.807592       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0115 14:04:25.327164       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.200.115"}
	E0115 14:04:33.632597       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","workload-high","workload-low","global-default","catch-all","exempt","system","node-high"] items=[{},{},{},{},{},{},{},{}]
	I0115 14:04:35.096410       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.50.60"}
	E0115 14:04:43.633049       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt","system"] items=[{},{},{},{},{},{},{},{}]
	I0115 14:04:52.779983       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 14:04:52.780030       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 14:04:52.803381       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 14:04:52.803428       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 14:04:52.816094       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 14:04:52.816145       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 14:04:52.861543       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 14:04:52.861586       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 14:04:52.867731       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 14:04:52.867783       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 14:04:52.884977       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 14:04:52.885036       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0115 14:04:52.906347       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0115 14:04:52.907455       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0115 14:04:53.633613       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","catch-all","exempt","system","node-high","leader-election","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	W0115 14:04:53.862117       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0115 14:04:53.908583       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0115 14:04:53.914185       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0115 14:05:03.634032       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","system","node-high","leader-election","workload-high","workload-low","global-default"] items=[{},{},{},{},{},{},{},{}]
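The recurring apf_controller error means the API Priority and Fairness controller could not redistribute seat concurrency across the eight built-in priority levels; it repeats on its 10s sync loop (14:04:23, :33, :43, :53, 14:05:03) but is not fatal here, since the server keeps allocating ClusterIPs and updating the ResourceManager in between. The objects involved can be inspected with:

    kubectl --context addons-916083 get prioritylevelconfigurations,flowschemas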
	
	
	==> kube-controller-manager [03bb27f5cf55e47253b31ebd97175a5e43bb2f02bef5293212d5db8853b9a511] <==
	E0115 14:04:57.244938       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0115 14:04:57.713800       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 14:04:57.713832       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0115 14:04:59.312089       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0115 14:04:59.312133       1 shared_informer.go:318] Caches are synced for resource quota
	I0115 14:04:59.660651       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0115 14:04:59.660694       1 shared_informer.go:318] Caches are synced for garbage collector
	W0115 14:05:01.845802       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 14:05:01.845836       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0115 14:05:02.507947       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0115 14:05:02.607888       1 namespace_controller.go:182] "Namespace has been deleted" namespace="ingress-nginx"
	I0115 14:05:02.690743       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	W0115 14:05:02.959081       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 14:05:02.959122       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0115 14:05:03.546832       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 14:05:03.546865       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0115 14:05:03.804221       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 14:05:03.804253       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0115 14:05:06.529426       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="51.945µs"
	W0115 14:05:10.308287       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 14:05:10.308327       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0115 14:05:11.823655       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 14:05:11.823687       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0115 14:05:12.714959       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0115 14:05:12.714997       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
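The PartialObjectMetadata list/watch failures begin right after the API server terminated the volumesnapshot watchers at 14:04:53 (see the kube-apiserver log above): the snapshot.storage.k8s.io CRDs were removed while the metadata informers backing garbage collection and quota still retry them, until discovery catches up. What the group still serves can be checked with:

    kubectl --context addons-916083 api-resources --api-group=snapshot.storage.k8s.io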
	
	
	==> kube-proxy [631d60ffe727cb63aaea4212d8ec338271a8310d314d3a5e7c2720cb7a1c338f] <==
	I0115 14:02:31.586464       1 server_others.go:69] "Using iptables proxy"
	I0115 14:02:31.603709       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0115 14:02:31.686387       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0115 14:02:31.688604       1 server_others.go:152] "Using iptables Proxier"
	I0115 14:02:31.688645       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0115 14:02:31.688654       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0115 14:02:31.688713       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0115 14:02:31.688963       1 server.go:846] "Version info" version="v1.28.4"
	I0115 14:02:31.688980       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0115 14:02:31.689977       1 config.go:188] "Starting service config controller"
	I0115 14:02:31.699582       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0115 14:02:31.690700       1 config.go:97] "Starting endpoint slice config controller"
	I0115 14:02:31.691309       1 config.go:315] "Starting node config controller"
	I0115 14:02:31.700969       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0115 14:02:31.702224       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0115 14:02:31.702246       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0115 14:02:31.702262       1 shared_informer.go:318] Caches are synced for service config
	I0115 14:02:31.802186       1 shared_informer.go:318] Caches are synced for node config
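The route_localnet line above documents a deliberate trade-off: kube-proxy sets that sysctl to 1 so that, as the message itself says, node-ports stay reachable on localhost. The effect is verifiable on the node with a check like:

    out/minikube-linux-arm64 -p addons-916083 ssh "sysctl net.ipv4.conf.all.route_localnet"
    # expected output: net.ipv4.conf.all.route_localnet = 1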
	
	
	==> kube-scheduler [ab42205eeee750d8578778a065fec6c53560a1398c6b6ae117de30bae5ea2d90] <==
	W0115 14:02:13.970152       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0115 14:02:13.970169       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0115 14:02:13.970236       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0115 14:02:13.970250       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0115 14:02:13.970330       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0115 14:02:13.970355       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0115 14:02:13.970397       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0115 14:02:13.970413       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0115 14:02:13.970463       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0115 14:02:13.970478       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0115 14:02:13.970529       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0115 14:02:13.970543       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0115 14:02:13.970594       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0115 14:02:13.970613       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0115 14:02:13.970666       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0115 14:02:13.970681       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0115 14:02:13.970716       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0115 14:02:13.970760       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0115 14:02:13.970810       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0115 14:02:13.970838       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0115 14:02:13.970978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0115 14:02:13.971003       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0115 14:02:13.971034       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0115 14:02:13.971050       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0115 14:02:15.058493       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
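All of the "forbidden" list/watch errors above carry the same 14:02:13 timestamp, before the client-ca informer sync at 14:02:15: this is the usual startup window in which the scheduler's informers race the RBAC bootstrap, and the errors stop once caches sync. One of the bindings involved can be checked with:

    kubectl --context addons-916083 get clusterrolebinding system:kube-scheduler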
	
	
	==> kubelet <==
	Jan 15 14:05:07 addons-916083 kubelet[1339]: I0115 14:05:07.696748    1339 memory_manager.go:346] "RemoveStaleState removing state" podUID="1d3ccb2a-9321-4db8-9a68-e057ecf90056" containerName="helper-pod"
	Jan 15 14:05:07 addons-916083 kubelet[1339]: I0115 14:05:07.696833    1339 memory_manager.go:346] "RemoveStaleState removing state" podUID="2106a856-cb65-4ce7-84ae-6bc223f27497" containerName="minikube-ingress-dns"
	Jan 15 14:05:07 addons-916083 kubelet[1339]: I0115 14:05:07.696904    1339 memory_manager.go:346] "RemoveStaleState removing state" podUID="10888201-3bd5-457a-aa04-7bc6a2d2dc6a" containerName="nvidia-device-plugin-ctr"
	Jan 15 14:05:07 addons-916083 kubelet[1339]: I0115 14:05:07.783145    1339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs22n\" (UniqueName: \"kubernetes.io/projected/1d543c57-f8ba-4c5e-ab39-b29e99623a36-kube-api-access-rs22n\") pod \"test-local-path\" (UID: \"1d543c57-f8ba-4c5e-ab39-b29e99623a36\") " pod="default/test-local-path"
	Jan 15 14:05:07 addons-916083 kubelet[1339]: I0115 14:05:07.784170    1339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ebcda5df-1519-4ca3-8350-f3873dc95050\" (UniqueName: \"kubernetes.io/host-path/1d543c57-f8ba-4c5e-ab39-b29e99623a36-pvc-ebcda5df-1519-4ca3-8350-f3873dc95050\") pod \"test-local-path\" (UID: \"1d543c57-f8ba-4c5e-ab39-b29e99623a36\") " pod="default/test-local-path"
	Jan 15 14:05:07 addons-916083 kubelet[1339]: I0115 14:05:07.784293    1339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1d543c57-f8ba-4c5e-ab39-b29e99623a36-gcp-creds\") pod \"test-local-path\" (UID: \"1d543c57-f8ba-4c5e-ab39-b29e99623a36\") " pod="default/test-local-path"
	Jan 15 14:05:08 addons-916083 kubelet[1339]: I0115 14:05:08.514018    1339 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1d3ccb2a-9321-4db8-9a68-e057ecf90056" path="/var/lib/kubelet/pods/1d3ccb2a-9321-4db8-9a68-e057ecf90056/volumes"
	Jan 15 14:05:11 addons-916083 kubelet[1339]: I0115 14:05:11.109419    1339 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs22n\" (UniqueName: \"kubernetes.io/projected/1d543c57-f8ba-4c5e-ab39-b29e99623a36-kube-api-access-rs22n\") pod \"1d543c57-f8ba-4c5e-ab39-b29e99623a36\" (UID: \"1d543c57-f8ba-4c5e-ab39-b29e99623a36\") "
	Jan 15 14:05:11 addons-916083 kubelet[1339]: I0115 14:05:11.109478    1339 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1d543c57-f8ba-4c5e-ab39-b29e99623a36-gcp-creds\") pod \"1d543c57-f8ba-4c5e-ab39-b29e99623a36\" (UID: \"1d543c57-f8ba-4c5e-ab39-b29e99623a36\") "
	Jan 15 14:05:11 addons-916083 kubelet[1339]: I0115 14:05:11.109511    1339 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/1d543c57-f8ba-4c5e-ab39-b29e99623a36-pvc-ebcda5df-1519-4ca3-8350-f3873dc95050\") pod \"1d543c57-f8ba-4c5e-ab39-b29e99623a36\" (UID: \"1d543c57-f8ba-4c5e-ab39-b29e99623a36\") "
	Jan 15 14:05:11 addons-916083 kubelet[1339]: I0115 14:05:11.109599    1339 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d543c57-f8ba-4c5e-ab39-b29e99623a36-pvc-ebcda5df-1519-4ca3-8350-f3873dc95050" (OuterVolumeSpecName: "data") pod "1d543c57-f8ba-4c5e-ab39-b29e99623a36" (UID: "1d543c57-f8ba-4c5e-ab39-b29e99623a36"). InnerVolumeSpecName "pvc-ebcda5df-1519-4ca3-8350-f3873dc95050". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jan 15 14:05:11 addons-916083 kubelet[1339]: I0115 14:05:11.109629    1339 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d543c57-f8ba-4c5e-ab39-b29e99623a36-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "1d543c57-f8ba-4c5e-ab39-b29e99623a36" (UID: "1d543c57-f8ba-4c5e-ab39-b29e99623a36"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jan 15 14:05:11 addons-916083 kubelet[1339]: I0115 14:05:11.114346    1339 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d543c57-f8ba-4c5e-ab39-b29e99623a36-kube-api-access-rs22n" (OuterVolumeSpecName: "kube-api-access-rs22n") pod "1d543c57-f8ba-4c5e-ab39-b29e99623a36" (UID: "1d543c57-f8ba-4c5e-ab39-b29e99623a36"). InnerVolumeSpecName "kube-api-access-rs22n". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 15 14:05:11 addons-916083 kubelet[1339]: I0115 14:05:11.210124    1339 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rs22n\" (UniqueName: \"kubernetes.io/projected/1d543c57-f8ba-4c5e-ab39-b29e99623a36-kube-api-access-rs22n\") on node \"addons-916083\" DevicePath \"\""
	Jan 15 14:05:11 addons-916083 kubelet[1339]: I0115 14:05:11.210165    1339 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1d543c57-f8ba-4c5e-ab39-b29e99623a36-gcp-creds\") on node \"addons-916083\" DevicePath \"\""
	Jan 15 14:05:11 addons-916083 kubelet[1339]: I0115 14:05:11.210182    1339 reconciler_common.go:300] "Volume detached for volume \"pvc-ebcda5df-1519-4ca3-8350-f3873dc95050\" (UniqueName: \"kubernetes.io/host-path/1d543c57-f8ba-4c5e-ab39-b29e99623a36-pvc-ebcda5df-1519-4ca3-8350-f3873dc95050\") on node \"addons-916083\" DevicePath \"\""
	Jan 15 14:05:11 addons-916083 kubelet[1339]: I0115 14:05:11.932290    1339 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3cf96895c850cc6ff6d09442fed7d9b673026852ee91bbb927c05ffd666d5ba"
	Jan 15 14:05:12 addons-916083 kubelet[1339]: I0115 14:05:12.516627    1339 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1d543c57-f8ba-4c5e-ab39-b29e99623a36" path="/var/lib/kubelet/pods/1d543c57-f8ba-4c5e-ab39-b29e99623a36/volumes"
	Jan 15 14:05:12 addons-916083 kubelet[1339]: I0115 14:05:12.647870    1339 topology_manager.go:215] "Topology Admit Handler" podUID="37d75c2f-cacc-4bcf-bcfc-9f62c36c5594" podNamespace="local-path-storage" podName="helper-pod-delete-pvc-ebcda5df-1519-4ca3-8350-f3873dc95050"
	Jan 15 14:05:12 addons-916083 kubelet[1339]: E0115 14:05:12.647950    1339 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1d543c57-f8ba-4c5e-ab39-b29e99623a36" containerName="busybox"
	Jan 15 14:05:12 addons-916083 kubelet[1339]: I0115 14:05:12.647991    1339 memory_manager.go:346] "RemoveStaleState removing state" podUID="1d543c57-f8ba-4c5e-ab39-b29e99623a36" containerName="busybox"
	Jan 15 14:05:12 addons-916083 kubelet[1339]: I0115 14:05:12.727340    1339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z25r\" (UniqueName: \"kubernetes.io/projected/37d75c2f-cacc-4bcf-bcfc-9f62c36c5594-kube-api-access-7z25r\") pod \"helper-pod-delete-pvc-ebcda5df-1519-4ca3-8350-f3873dc95050\" (UID: \"37d75c2f-cacc-4bcf-bcfc-9f62c36c5594\") " pod="local-path-storage/helper-pod-delete-pvc-ebcda5df-1519-4ca3-8350-f3873dc95050"
	Jan 15 14:05:12 addons-916083 kubelet[1339]: I0115 14:05:12.727694    1339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/37d75c2f-cacc-4bcf-bcfc-9f62c36c5594-data\") pod \"helper-pod-delete-pvc-ebcda5df-1519-4ca3-8350-f3873dc95050\" (UID: \"37d75c2f-cacc-4bcf-bcfc-9f62c36c5594\") " pod="local-path-storage/helper-pod-delete-pvc-ebcda5df-1519-4ca3-8350-f3873dc95050"
	Jan 15 14:05:12 addons-916083 kubelet[1339]: I0115 14:05:12.727839    1339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/37d75c2f-cacc-4bcf-bcfc-9f62c36c5594-script\") pod \"helper-pod-delete-pvc-ebcda5df-1519-4ca3-8350-f3873dc95050\" (UID: \"37d75c2f-cacc-4bcf-bcfc-9f62c36c5594\") " pod="local-path-storage/helper-pod-delete-pvc-ebcda5df-1519-4ca3-8350-f3873dc95050"
	Jan 15 14:05:12 addons-916083 kubelet[1339]: I0115 14:05:12.728010    1339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/37d75c2f-cacc-4bcf-bcfc-9f62c36c5594-gcp-creds\") pod \"helper-pod-delete-pvc-ebcda5df-1519-4ca3-8350-f3873dc95050\" (UID: \"37d75c2f-cacc-4bcf-bcfc-9f62c36c5594\") " pod="local-path-storage/helper-pod-delete-pvc-ebcda5df-1519-4ca3-8350-f3873dc95050"
	
	
	==> storage-provisioner [8702bfee57bb9e8e04569ec57888559b3ea0d29b0a2af00f5b96c1b8921d474a] <==
	I0115 14:02:35.760716       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0115 14:02:35.850992       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0115 14:02:35.851072       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0115 14:02:35.886709       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0115 14:02:35.886891       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-916083_afbf9aca-6a2f-4286-b109-b9e57a45b1e6!
	I0115 14:02:35.897886       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6fd94479-8d93-428e-bb75-f1c93fc214d4", APIVersion:"v1", ResourceVersion:"566", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-916083_afbf9aca-6a2f-4286-b109-b9e57a45b1e6 became leader
	I0115 14:02:35.987030       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-916083_afbf9aca-6a2f-4286-b109-b9e57a45b1e6!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-916083 -n addons-916083
helpers_test.go:261: (dbg) Run:  kubectl --context addons-916083 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: helper-pod-delete-pvc-ebcda5df-1519-4ca3-8350-f3873dc95050
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/CloudSpanner]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-916083 describe pod helper-pod-delete-pvc-ebcda5df-1519-4ca3-8350-f3873dc95050
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-916083 describe pod helper-pod-delete-pvc-ebcda5df-1519-4ca3-8350-f3873dc95050: exit status 1 (93.292912ms)

** stderr ** 
	Error from server (NotFound): pods "helper-pod-delete-pvc-ebcda5df-1519-4ca3-8350-f3873dc95050" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-916083 describe pod helper-pod-delete-pvc-ebcda5df-1519-4ca3-8350-f3873dc95050: exit status 1
--- FAIL: TestAddons/parallel/CloudSpanner (9.86s)
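
Note: the NotFound from the describe step above is likely a post-mortem race rather than extra signal: the helper-pod-delete-pvc-... pod appeared in the non-running list at helpers_test.go:272, but these local-path-provisioner helper pods are short-lived and this one was already gone ~93ms later when describe ran. A way to watch the race when reproducing (a sketch; assumes the addons-916083 profile from this run is still up):

	kubectl --context addons-916083 get po -A --field-selector=status.phase!=Running -w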

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 image load --daemon gcr.io/google-containers/addon-resizer:functional-672946 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-672946 image load --daemon gcr.io/google-containers/addon-resizer:functional-672946 --alsologtostderr: (4.118171388s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-672946" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.37s)
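
Note: this failure and the two that follow (ImageReloadDaemon, ImageTagAndLoadDaemon) trip the same assertion at functional_test.go:442: `image load --daemon` exits successfully, but the follow-up `image ls` does not show the gcr.io/google-containers/addon-resizer:functional-672946 tag. A manual way to compare minikube's view with containerd's view inside the node (a sketch; assumes the functional-672946 profile is still running and that crictl is available in the node image, which this log does not show):

	out/minikube-linux-arm64 -p functional-672946 image ls
	out/minikube-linux-arm64 -p functional-672946 ssh -- sudo crictl images | grep addon-resizer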

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 image load --daemon gcr.io/google-containers/addon-resizer:functional-672946 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-672946 image load --daemon gcr.io/google-containers/addon-resizer:functional-672946 --alsologtostderr: (3.319391844s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-672946" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.57s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.523141526s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-672946
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 image load --daemon gcr.io/google-containers/addon-resizer:functional-672946 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-672946 image load --daemon gcr.io/google-containers/addon-resizer:functional-672946 --alsologtostderr: (3.150532105s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-672946" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.95s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 image save gcr.io/google-containers/addon-resizer:functional-672946 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.57s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

** stderr ** 
	I0115 14:11:09.561685 4035139 out.go:296] Setting OutFile to fd 1 ...
	I0115 14:11:09.561907 4035139 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 14:11:09.561920 4035139 out.go:309] Setting ErrFile to fd 2...
	I0115 14:11:09.561926 4035139 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 14:11:09.562261 4035139 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-3996034/.minikube/bin
	I0115 14:11:09.562974 4035139 config.go:182] Loaded profile config "functional-672946": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 14:11:09.563163 4035139 config.go:182] Loaded profile config "functional-672946": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 14:11:09.563689 4035139 cli_runner.go:164] Run: docker container inspect functional-672946 --format={{.State.Status}}
	I0115 14:11:09.585480 4035139 ssh_runner.go:195] Run: systemctl --version
	I0115 14:11:09.585625 4035139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-672946
	I0115 14:11:09.603561 4035139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36454 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/functional-672946/id_rsa Username:docker}
	I0115 14:11:09.697126 4035139 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W0115 14:11:09.697217 4035139 cache_images.go:254] Failed to load cached images for profile functional-672946. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I0115 14:11:09.697243 4035139 cache_images.go:262] succeeded pushing to: 
	I0115 14:11:09.697250 4035139 cache_images.go:263] failed pushing to: functional-672946

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)
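
Note: the stderr above points at the root cause: `image load` stats /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar and gets "no such file or directory", i.e. the tarball ImageSaveToFile was supposed to write was never created, so this failure is downstream of ImageSaveToFile rather than an independent load bug. To reproduce the pair in isolation (a sketch; assumes a running functional-672946 profile and uses /tmp instead of the Jenkins workspace path):

	out/minikube-linux-arm64 -p functional-672946 image save gcr.io/google-containers/addon-resizer:functional-672946 /tmp/addon-resizer-save.tar --alsologtostderr
	ls -l /tmp/addon-resizer-save.tar   # ImageSaveToFile asserts this file exists; in this run it was never written
	out/minikube-linux-arm64 -p functional-672946 image load /tmp/addon-resizer-save.tar --alsologtostderr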

TestIngressAddonLegacy/serial/ValidateIngressAddons (49.62s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-062316 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-062316 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.930509359s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-062316 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-062316 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [58b42cde-2d8a-4825-81a2-a04535945745] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [58b42cde-2d8a-4825-81a2-a04535945745] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 8.003424255s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-062316 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-062316 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-062316 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.020210683s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-062316 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-062316 addons disable ingress-dns --alsologtostderr -v=1: (3.349728342s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-062316 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-062316 addons disable ingress --alsologtostderr -v=1: (7.558688272s)
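
Note: the failure pattern here matches TestAddons/parallel/Ingress at the top of this report: the in-cluster curl through the ingress controller succeeded, and only the nslookup against the node IP timed out, which points at DNS (UDP/53) reachability to 192.168.49.2 from the CI host rather than at the ingress itself. A manual probe with an explicit timeout (a sketch; dig is an alternative to nslookup, and the IP comes from the `ip` step above):

	nslookup hello-john.test 192.168.49.2
	dig +time=5 +tries=1 hello-john.test @192.168.49.2
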
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-062316
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-062316:

-- stdout --
	[
	    {
	        "Id": "800e2c1fc9e91679eed9bc17983759adfd20e2280003184789fa19e288a30b1d",
	        "Created": "2024-01-15T14:11:37.852458992Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 4036281,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-01-15T14:11:38.19037343Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20e2d9b56eb2e595fd2b9c5719a0e58f3d7f8c692190d8fde2558cb6a9714f01",
	        "ResolvConfPath": "/var/lib/docker/containers/800e2c1fc9e91679eed9bc17983759adfd20e2280003184789fa19e288a30b1d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/800e2c1fc9e91679eed9bc17983759adfd20e2280003184789fa19e288a30b1d/hostname",
	        "HostsPath": "/var/lib/docker/containers/800e2c1fc9e91679eed9bc17983759adfd20e2280003184789fa19e288a30b1d/hosts",
	        "LogPath": "/var/lib/docker/containers/800e2c1fc9e91679eed9bc17983759adfd20e2280003184789fa19e288a30b1d/800e2c1fc9e91679eed9bc17983759adfd20e2280003184789fa19e288a30b1d-json.log",
	        "Name": "/ingress-addon-legacy-062316",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-062316:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-062316",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bfeaa09b86066f18f2187cae13b4835edb9ac0aa32910945fbfeeaa11cec1495-init/diff:/var/lib/docker/overlay2/37735672df261a15b7a2ba1989e6f3a0906a58ecd248d26a2bc61e23d88a15c5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bfeaa09b86066f18f2187cae13b4835edb9ac0aa32910945fbfeeaa11cec1495/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bfeaa09b86066f18f2187cae13b4835edb9ac0aa32910945fbfeeaa11cec1495/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bfeaa09b86066f18f2187cae13b4835edb9ac0aa32910945fbfeeaa11cec1495/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-062316",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-062316/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-062316",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-062316",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-062316",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "175b7c38848ca008d3f60f2516c2cfffa17d796ccd4251d60ce3ae7cb3caf6c7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36459"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36458"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36455"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36457"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36456"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/175b7c38848c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-062316": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "800e2c1fc9e9",
	                        "ingress-addon-legacy-062316"
	                    ],
	                    "NetworkID": "fc2846e95f7eff2499a07db38029482a936a9c2e80de286bf0ea820611657247",
	                    "EndpointID": "2c50d689bbb990efd6f8e0a113be579db0fb052e2409c17cc59603b5d8b7faac",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-062316 -n ingress-addon-legacy-062316
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-062316 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-062316 logs -n 25: (1.389415587s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                     Args                                     |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image   | functional-672946 image ls                                                   | functional-672946           | jenkins | v1.32.0 | 15 Jan 24 14:10 UTC | 15 Jan 24 14:10 UTC |
	| image   | functional-672946 image load --daemon                                        | functional-672946           | jenkins | v1.32.0 | 15 Jan 24 14:10 UTC | 15 Jan 24 14:11 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-672946                     |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-672946 image ls                                                   | functional-672946           | jenkins | v1.32.0 | 15 Jan 24 14:11 UTC | 15 Jan 24 14:11 UTC |
	| image   | functional-672946 image load --daemon                                        | functional-672946           | jenkins | v1.32.0 | 15 Jan 24 14:11 UTC | 15 Jan 24 14:11 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-672946                     |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-672946 image ls                                                   | functional-672946           | jenkins | v1.32.0 | 15 Jan 24 14:11 UTC | 15 Jan 24 14:11 UTC |
	| image   | functional-672946 image save                                                 | functional-672946           | jenkins | v1.32.0 | 15 Jan 24 14:11 UTC | 15 Jan 24 14:11 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-672946                     |                             |         |         |                     |                     |
	|         | /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-672946 image rm                                                   | functional-672946           | jenkins | v1.32.0 | 15 Jan 24 14:11 UTC | 15 Jan 24 14:11 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-672946                     |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-672946 image ls                                                   | functional-672946           | jenkins | v1.32.0 | 15 Jan 24 14:11 UTC | 15 Jan 24 14:11 UTC |
	| image   | functional-672946 image load                                                 | functional-672946           | jenkins | v1.32.0 | 15 Jan 24 14:11 UTC | 15 Jan 24 14:11 UTC |
	|         | /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-672946 image save --daemon                                        | functional-672946           | jenkins | v1.32.0 | 15 Jan 24 14:11 UTC | 15 Jan 24 14:11 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-672946                     |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-672946                                                            | functional-672946           | jenkins | v1.32.0 | 15 Jan 24 14:11 UTC | 15 Jan 24 14:11 UTC |
	|         | image ls --format short                                                      |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-672946                                                            | functional-672946           | jenkins | v1.32.0 | 15 Jan 24 14:11 UTC | 15 Jan 24 14:11 UTC |
	|         | image ls --format yaml                                                       |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-672946                                                            | functional-672946           | jenkins | v1.32.0 | 15 Jan 24 14:11 UTC | 15 Jan 24 14:11 UTC |
	|         | image ls --format json                                                       |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image   | functional-672946                                                            | functional-672946           | jenkins | v1.32.0 | 15 Jan 24 14:11 UTC | 15 Jan 24 14:11 UTC |
	|         | image ls --format table                                                      |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| ssh     | functional-672946 ssh pgrep                                                  | functional-672946           | jenkins | v1.32.0 | 15 Jan 24 14:11 UTC |                     |
	|         | buildkitd                                                                    |                             |         |         |                     |                     |
	| image   | functional-672946 image build -t                                             | functional-672946           | jenkins | v1.32.0 | 15 Jan 24 14:11 UTC | 15 Jan 24 14:11 UTC |
	|         | localhost/my-image:functional-672946                                         |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                                             |                             |         |         |                     |                     |
	| image   | functional-672946 image ls                                                   | functional-672946           | jenkins | v1.32.0 | 15 Jan 24 14:11 UTC | 15 Jan 24 14:11 UTC |
	| delete  | -p functional-672946                                                         | functional-672946           | jenkins | v1.32.0 | 15 Jan 24 14:11 UTC | 15 Jan 24 14:11 UTC |
	| start   | -p ingress-addon-legacy-062316                                               | ingress-addon-legacy-062316 | jenkins | v1.32.0 | 15 Jan 24 14:11 UTC | 15 Jan 24 14:12 UTC |
	|         | --kubernetes-version=v1.18.20                                                |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                            |                             |         |         |                     |                     |
	|         | -v=5 --driver=docker                                                         |                             |         |         |                     |                     |
	|         | --container-runtime=containerd                                               |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-062316                                                  | ingress-addon-legacy-062316 | jenkins | v1.32.0 | 15 Jan 24 14:12 UTC | 15 Jan 24 14:12 UTC |
	|         | addons enable ingress                                                        |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-062316                                                  | ingress-addon-legacy-062316 | jenkins | v1.32.0 | 15 Jan 24 14:12 UTC | 15 Jan 24 14:12 UTC |
	|         | addons enable ingress-dns                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-062316                                                  | ingress-addon-legacy-062316 | jenkins | v1.32.0 | 15 Jan 24 14:13 UTC | 15 Jan 24 14:13 UTC |
	|         | ssh curl -s http://127.0.0.1/                                                |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                                 |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-062316 ip                                               | ingress-addon-legacy-062316 | jenkins | v1.32.0 | 15 Jan 24 14:13 UTC | 15 Jan 24 14:13 UTC |
	| addons  | ingress-addon-legacy-062316                                                  | ingress-addon-legacy-062316 | jenkins | v1.32.0 | 15 Jan 24 14:13 UTC | 15 Jan 24 14:13 UTC |
	|         | addons disable ingress-dns                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-062316                                                  | ingress-addon-legacy-062316 | jenkins | v1.32.0 | 15 Jan 24 14:13 UTC | 15 Jan 24 14:13 UTC |
	|         | addons disable ingress                                                       |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 14:11:16
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 14:11:16.623411 4035830 out.go:296] Setting OutFile to fd 1 ...
	I0115 14:11:16.623639 4035830 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 14:11:16.623665 4035830 out.go:309] Setting ErrFile to fd 2...
	I0115 14:11:16.623684 4035830 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 14:11:16.623956 4035830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-3996034/.minikube/bin
	I0115 14:11:16.624485 4035830 out.go:303] Setting JSON to false
	I0115 14:11:16.625352 4035830 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":68020,"bootTime":1705259857,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0115 14:11:16.625447 4035830 start.go:138] virtualization:  
	I0115 14:11:16.628410 4035830 out.go:177] * [ingress-addon-legacy-062316] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0115 14:11:16.631341 4035830 out.go:177]   - MINIKUBE_LOCATION=17957
	I0115 14:11:16.631495 4035830 notify.go:220] Checking for updates...
	I0115 14:11:16.635077 4035830 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 14:11:16.637223 4035830 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17957-3996034/kubeconfig
	I0115 14:11:16.638969 4035830 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-3996034/.minikube
	I0115 14:11:16.641484 4035830 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0115 14:11:16.643650 4035830 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 14:11:16.645848 4035830 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 14:11:16.669109 4035830 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 14:11:16.669257 4035830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 14:11:16.749433 4035830 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2024-01-15 14:11:16.740001863 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 14:11:16.749537 4035830 docker.go:295] overlay module found
	I0115 14:11:16.752055 4035830 out.go:177] * Using the docker driver based on user configuration
	I0115 14:11:16.754160 4035830 start.go:298] selected driver: docker
	I0115 14:11:16.754176 4035830 start.go:902] validating driver "docker" against <nil>
	I0115 14:11:16.754187 4035830 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 14:11:16.754771 4035830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 14:11:16.820440 4035830 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2024-01-15 14:11:16.810829616 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 14:11:16.820606 4035830 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 14:11:16.820850 4035830 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0115 14:11:16.823106 4035830 out.go:177] * Using Docker driver with root privileges
	I0115 14:11:16.824854 4035830 cni.go:84] Creating CNI manager for ""
	I0115 14:11:16.824872 4035830 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0115 14:11:16.824885 4035830 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0115 14:11:16.824897 4035830 start_flags.go:321] config:
	{Name:ingress-addon-legacy-062316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-062316 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 14:11:16.828433 4035830 out.go:177] * Starting control plane node ingress-addon-legacy-062316 in cluster ingress-addon-legacy-062316
	I0115 14:11:16.830374 4035830 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0115 14:11:16.832403 4035830 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0115 14:11:16.834595 4035830 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0115 14:11:16.834622 4035830 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0115 14:11:16.851201 4035830 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0115 14:11:16.851226 4035830 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0115 14:11:16.976426 4035830 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I0115 14:11:16.976458 4035830 cache.go:56] Caching tarball of preloaded images
	I0115 14:11:16.976631 4035830 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0115 14:11:16.979066 4035830 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0115 14:11:16.980737 4035830 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I0115 14:11:17.087344 4035830 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4?checksum=md5:9e505be2989b8c051b1372c317471064 -> /home/jenkins/minikube-integration/17957-3996034/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I0115 14:11:30.052495 4035830 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I0115 14:11:30.052605 4035830 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17957-3996034/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I0115 14:11:31.243662 4035830 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on containerd
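The preload tarball is fetched with an md5 checksum pinned in the download URL and re-verified after saving. A minimal way to reproduce that verification by hand (a sketch, assuming curl and coreutils on the host) is:

    curl -fLO "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4"
    # compare against the checksum pinned in the URL above: 9e505be2989b8c051b1372c317471064
    md5sum preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4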
	I0115 14:11:31.244033 4035830 profile.go:148] Saving config to /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/config.json ...
	I0115 14:11:31.244064 4035830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/config.json: {Name:mkdbd45560c02f1279bbdcc45f9b1473005dc014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:11:31.244264 4035830 cache.go:194] Successfully downloaded all kic artifacts
	I0115 14:11:31.244325 4035830 start.go:365] acquiring machines lock for ingress-addon-legacy-062316: {Name:mk36a018528f81688458e619262c6755222cb4cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0115 14:11:31.244387 4035830 start.go:369] acquired machines lock for "ingress-addon-legacy-062316" in 46.825µs
	I0115 14:11:31.244411 4035830 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-062316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-062316 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0115 14:11:31.244486 4035830 start.go:125] createHost starting for "" (driver="docker")
	I0115 14:11:31.247279 4035830 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0115 14:11:31.247508 4035830 start.go:159] libmachine.API.Create for "ingress-addon-legacy-062316" (driver="docker")
	I0115 14:11:31.247534 4035830 client.go:168] LocalClient.Create starting
	I0115 14:11:31.247636 4035830 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/ca.pem
	I0115 14:11:31.247672 4035830 main.go:141] libmachine: Decoding PEM data...
	I0115 14:11:31.247691 4035830 main.go:141] libmachine: Parsing certificate...
	I0115 14:11:31.247752 4035830 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/cert.pem
	I0115 14:11:31.247775 4035830 main.go:141] libmachine: Decoding PEM data...
	I0115 14:11:31.247790 4035830 main.go:141] libmachine: Parsing certificate...
	I0115 14:11:31.248153 4035830 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-062316 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0115 14:11:31.264681 4035830 cli_runner.go:211] docker network inspect ingress-addon-legacy-062316 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0115 14:11:31.264762 4035830 network_create.go:281] running [docker network inspect ingress-addon-legacy-062316] to gather additional debugging logs...
	I0115 14:11:31.264783 4035830 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-062316
	W0115 14:11:31.280594 4035830 cli_runner.go:211] docker network inspect ingress-addon-legacy-062316 returned with exit code 1
	I0115 14:11:31.280631 4035830 network_create.go:284] error running [docker network inspect ingress-addon-legacy-062316]: docker network inspect ingress-addon-legacy-062316: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-062316 not found
	I0115 14:11:31.280646 4035830 network_create.go:286] output of [docker network inspect ingress-addon-legacy-062316]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-062316 not found
	
	** /stderr **
	I0115 14:11:31.280761 4035830 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 14:11:31.297789 4035830 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40004cdb50}
	I0115 14:11:31.297833 4035830 network_create.go:124] attempt to create docker network ingress-addon-legacy-062316 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0115 14:11:31.297893 4035830 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-062316 ingress-addon-legacy-062316
	I0115 14:11:31.366760 4035830 network_create.go:108] docker network ingress-addon-legacy-062316 192.168.49.0/24 created
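minikube asked for the free private subnet 192.168.49.0/24 with gateway 192.168.49.1 and an MTU of 1500; one way to confirm the created network matches (a sketch, assuming the docker CLI on the Jenkins host) is:

    docker network inspect ingress-addon-legacy-062316 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # expected: 192.168.49.0/24 192.168.49.1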
	I0115 14:11:31.366791 4035830 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-062316" container
	I0115 14:11:31.366862 4035830 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0115 14:11:31.383869 4035830 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-062316 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-062316 --label created_by.minikube.sigs.k8s.io=true
	I0115 14:11:31.402337 4035830 oci.go:103] Successfully created a docker volume ingress-addon-legacy-062316
	I0115 14:11:31.402441 4035830 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-062316-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-062316 --entrypoint /usr/bin/test -v ingress-addon-legacy-062316:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0115 14:11:32.918930 4035830 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-062316-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-062316 --entrypoint /usr/bin/test -v ingress-addon-legacy-062316:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib: (1.516451158s)
	I0115 14:11:32.918964 4035830 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-062316
	I0115 14:11:32.918998 4035830 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0115 14:11:32.919020 4035830 kic.go:194] Starting extracting preloaded images to volume ...
	I0115 14:11:32.919106 4035830 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17957-3996034/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-062316:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0115 14:11:37.769420 4035830 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17957-3996034/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-062316:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.850273185s)
	I0115 14:11:37.769453 4035830 kic.go:203] duration metric: took 4.850431 seconds to extract preloaded images to volume
	W0115 14:11:37.769596 4035830 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0115 14:11:37.769713 4035830 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0115 14:11:37.837027 4035830 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-062316 --name ingress-addon-legacy-062316 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-062316 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-062316 --network ingress-addon-legacy-062316 --ip 192.168.49.2 --volume ingress-addon-legacy-062316:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
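The kic container publishes 8443, 22, 2376, 5000 and 32443 on ephemeral 127.0.0.1 ports. The host port backing SSH (36459 in the provisioning steps below) can be looked up with (a sketch):

    docker port ingress-addon-legacy-062316 22/tcp
    # e.g. 127.0.0.1:36459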
	I0115 14:11:38.200435 4035830 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-062316 --format={{.State.Running}}
	I0115 14:11:38.222721 4035830 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-062316 --format={{.State.Status}}
	I0115 14:11:38.250816 4035830 cli_runner.go:164] Run: docker exec ingress-addon-legacy-062316 stat /var/lib/dpkg/alternatives/iptables
	I0115 14:11:38.320267 4035830 oci.go:144] the created container "ingress-addon-legacy-062316" has a running status.
	I0115 14:11:38.320292 4035830 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17957-3996034/.minikube/machines/ingress-addon-legacy-062316/id_rsa...
	I0115 14:11:39.135074 4035830 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17957-3996034/.minikube/machines/ingress-addon-legacy-062316/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0115 14:11:39.135133 4035830 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17957-3996034/.minikube/machines/ingress-addon-legacy-062316/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0115 14:11:39.168022 4035830 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-062316 --format={{.State.Status}}
	I0115 14:11:39.194562 4035830 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0115 14:11:39.194582 4035830 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-062316 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0115 14:11:39.260708 4035830 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-062316 --format={{.State.Status}}
	I0115 14:11:39.279244 4035830 machine.go:88] provisioning docker machine ...
	I0115 14:11:39.279273 4035830 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-062316"
	I0115 14:11:39.279335 4035830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-062316
	I0115 14:11:39.305370 4035830 main.go:141] libmachine: Using SSH client type: native
	I0115 14:11:39.305948 4035830 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 36459 <nil> <nil>}
	I0115 14:11:39.305969 4035830 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-062316 && echo "ingress-addon-legacy-062316" | sudo tee /etc/hostname
	I0115 14:11:39.467526 4035830 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-062316
	
	I0115 14:11:39.467625 4035830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-062316
	I0115 14:11:39.489810 4035830 main.go:141] libmachine: Using SSH client type: native
	I0115 14:11:39.490256 4035830 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bfa60] 0x3c21d0 <nil>  [] 0s} 127.0.0.1 36459 <nil> <nil>}
	I0115 14:11:39.490291 4035830 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-062316' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-062316/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-062316' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0115 14:11:39.628509 4035830 main.go:141] libmachine: SSH cmd err, output: <nil>: 
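The guarded block above keeps /etc/hosts idempotent: it rewrites an existing 127.0.1.1 entry in place and only appends when none is there. A condensed approximation (a sketch that skips the sed branch) is:

    grep -q 'ingress-addon-legacy-062316' /etc/hosts \
      || echo '127.0.1.1 ingress-addon-legacy-062316' | sudo tee -a /etc/hosts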
	I0115 14:11:39.628535 4035830 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17957-3996034/.minikube CaCertPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17957-3996034/.minikube}
	I0115 14:11:39.628561 4035830 ubuntu.go:177] setting up certificates
	I0115 14:11:39.628572 4035830 provision.go:83] configureAuth start
	I0115 14:11:39.628633 4035830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-062316
	I0115 14:11:39.649650 4035830 provision.go:138] copyHostCerts
	I0115 14:11:39.649701 4035830 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17957-3996034/.minikube/ca.pem
	I0115 14:11:39.649737 4035830 exec_runner.go:144] found /home/jenkins/minikube-integration/17957-3996034/.minikube/ca.pem, removing ...
	I0115 14:11:39.649747 4035830 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17957-3996034/.minikube/ca.pem
	I0115 14:11:39.649824 4035830 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17957-3996034/.minikube/ca.pem (1082 bytes)
	I0115 14:11:39.649901 4035830 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17957-3996034/.minikube/cert.pem
	I0115 14:11:39.649922 4035830 exec_runner.go:144] found /home/jenkins/minikube-integration/17957-3996034/.minikube/cert.pem, removing ...
	I0115 14:11:39.649931 4035830 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17957-3996034/.minikube/cert.pem
	I0115 14:11:39.649958 4035830 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17957-3996034/.minikube/cert.pem (1123 bytes)
	I0115 14:11:39.649999 4035830 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17957-3996034/.minikube/key.pem
	I0115 14:11:39.650019 4035830 exec_runner.go:144] found /home/jenkins/minikube-integration/17957-3996034/.minikube/key.pem, removing ...
	I0115 14:11:39.650026 4035830 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17957-3996034/.minikube/key.pem
	I0115 14:11:39.650055 4035830 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17957-3996034/.minikube/key.pem (1679 bytes)
	I0115 14:11:39.650125 4035830 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17957-3996034/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-062316 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-062316]
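The server certificate is issued for the SAN set logged above; openssl can confirm what actually landed in the cert (a sketch, using the path from the log, exact SAN ordering may differ):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/17957-3996034/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
    # should include: localhost, minikube, ingress-addon-legacy-062316, 192.168.49.2, 127.0.0.1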
	I0115 14:11:40.920322 4035830 provision.go:172] copyRemoteCerts
	I0115 14:11:40.920416 4035830 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0115 14:11:40.920462 4035830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-062316
	I0115 14:11:40.937815 4035830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36459 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/ingress-addon-legacy-062316/id_rsa Username:docker}
	I0115 14:11:41.037740 4035830 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0115 14:11:41.037801 4035830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0115 14:11:41.065725 4035830 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17957-3996034/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0115 14:11:41.065798 4035830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0115 14:11:41.093785 4035830 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17957-3996034/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0115 14:11:41.093851 4035830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0115 14:11:41.121600 4035830 provision.go:86] duration metric: configureAuth took 1.493014244s
	I0115 14:11:41.121670 4035830 ubuntu.go:193] setting minikube options for container-runtime
	I0115 14:11:41.121888 4035830 config.go:182] Loaded profile config "ingress-addon-legacy-062316": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0115 14:11:41.121901 4035830 machine.go:91] provisioned docker machine in 1.842639689s
	I0115 14:11:41.121908 4035830 client.go:171] LocalClient.Create took 9.874368356s
	I0115 14:11:41.121927 4035830 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-062316" took 9.874419112s
	I0115 14:11:41.121939 4035830 start.go:300] post-start starting for "ingress-addon-legacy-062316" (driver="docker")
	I0115 14:11:41.121949 4035830 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0115 14:11:41.122007 4035830 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0115 14:11:41.122053 4035830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-062316
	I0115 14:11:41.142528 4035830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36459 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/ingress-addon-legacy-062316/id_rsa Username:docker}
	I0115 14:11:41.246094 4035830 ssh_runner.go:195] Run: cat /etc/os-release
	I0115 14:11:41.249995 4035830 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0115 14:11:41.250031 4035830 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0115 14:11:41.250050 4035830 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0115 14:11:41.250060 4035830 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0115 14:11:41.250071 4035830 filesync.go:126] Scanning /home/jenkins/minikube-integration/17957-3996034/.minikube/addons for local assets ...
	I0115 14:11:41.250129 4035830 filesync.go:126] Scanning /home/jenkins/minikube-integration/17957-3996034/.minikube/files for local assets ...
	I0115 14:11:41.250217 4035830 filesync.go:149] local asset: /home/jenkins/minikube-integration/17957-3996034/.minikube/files/etc/ssl/certs/40013692.pem -> 40013692.pem in /etc/ssl/certs
	I0115 14:11:41.250229 4035830 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17957-3996034/.minikube/files/etc/ssl/certs/40013692.pem -> /etc/ssl/certs/40013692.pem
	I0115 14:11:41.250341 4035830 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0115 14:11:41.260235 4035830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/files/etc/ssl/certs/40013692.pem --> /etc/ssl/certs/40013692.pem (1708 bytes)
	I0115 14:11:41.287911 4035830 start.go:303] post-start completed in 165.957516ms
	I0115 14:11:41.288340 4035830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-062316
	I0115 14:11:41.305481 4035830 profile.go:148] Saving config to /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/config.json ...
	I0115 14:11:41.305763 4035830 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 14:11:41.305823 4035830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-062316
	I0115 14:11:41.322931 4035830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36459 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/ingress-addon-legacy-062316/id_rsa Username:docker}
	I0115 14:11:41.421171 4035830 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
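The two df probes check disk pressure on /var before continuing: the first prints the use percentage, the second the free space in whole gigabytes (a sketch; the example outputs are hypothetical):

    df -h /var | awk 'NR==2{print $5}'   # use%, e.g. 12%
    df -BG /var | awk 'NR==2{print $4}'  # free GB, e.g. 180G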
	I0115 14:11:41.426735 4035830 start.go:128] duration metric: createHost completed in 10.182233079s
	I0115 14:11:41.426761 4035830 start.go:83] releasing machines lock for "ingress-addon-legacy-062316", held for 10.182360386s
	I0115 14:11:41.426840 4035830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-062316
	I0115 14:11:41.444090 4035830 ssh_runner.go:195] Run: cat /version.json
	I0115 14:11:41.444118 4035830 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0115 14:11:41.444144 4035830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-062316
	I0115 14:11:41.444187 4035830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-062316
	I0115 14:11:41.468494 4035830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36459 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/ingress-addon-legacy-062316/id_rsa Username:docker}
	I0115 14:11:41.468717 4035830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36459 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/ingress-addon-legacy-062316/id_rsa Username:docker}
	I0115 14:11:41.694680 4035830 ssh_runner.go:195] Run: systemctl --version
	I0115 14:11:41.700315 4035830 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0115 14:11:41.705741 4035830 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0115 14:11:41.735706 4035830 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
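The find/sed pipeline above backfills a "name" field and bumps cniVersion so the stock loopback config parses under newer CNI plugins. The effective change looks like this (a sketch; the pre-patch version string is an assumption):

    # before (stock loopback config):
    { "cniVersion": "0.3.1", "type": "loopback" }
    # after the patch above (name backfilled, version bumped):
    { "cniVersion": "1.0.0", "name": "loopback", "type": "loopback" }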
	I0115 14:11:41.735784 4035830 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0115 14:11:41.768467 4035830 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0115 14:11:41.768495 4035830 start.go:475] detecting cgroup driver to use...
	I0115 14:11:41.768528 4035830 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0115 14:11:41.768583 4035830 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0115 14:11:41.783109 4035830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0115 14:11:41.796575 4035830 docker.go:217] disabling cri-docker service (if available) ...
	I0115 14:11:41.796662 4035830 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0115 14:11:41.812652 4035830 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0115 14:11:41.829413 4035830 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0115 14:11:41.934848 4035830 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0115 14:11:42.050149 4035830 docker.go:233] disabling docker service ...
	I0115 14:11:42.050260 4035830 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0115 14:11:42.073248 4035830 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0115 14:11:42.088576 4035830 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0115 14:11:42.189686 4035830 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0115 14:11:42.289127 4035830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0115 14:11:42.303696 4035830 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0115 14:11:42.324449 4035830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0115 14:11:42.338083 4035830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0115 14:11:42.351707 4035830 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0115 14:11:42.351827 4035830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0115 14:11:42.364188 4035830 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0115 14:11:42.376389 4035830 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0115 14:11:42.388184 4035830 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0115 14:11:42.400084 4035830 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0115 14:11:42.412746 4035830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0115 14:11:42.425186 4035830 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0115 14:11:42.435861 4035830 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0115 14:11:42.446378 4035830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 14:11:42.544660 4035830 ssh_runner.go:195] Run: sudo systemctl restart containerd
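The sed batch above switches containerd to the cgroupfs driver, pins the pause image, and normalizes the runc runtime to v2 before the restart. A quick post-restart check (a sketch) is:

    grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml
    # SystemdCgroup = false
    # sandbox_image = "registry.k8s.io/pause:3.2"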
	I0115 14:11:42.662893 4035830 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0115 14:11:42.662972 4035830 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0115 14:11:42.667665 4035830 start.go:543] Will wait 60s for crictl version
	I0115 14:11:42.667729 4035830 ssh_runner.go:195] Run: which crictl
	I0115 14:11:42.671968 4035830 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0115 14:11:42.712467 4035830 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.26
	RuntimeApiVersion:  v1
	I0115 14:11:42.712544 4035830 ssh_runner.go:195] Run: containerd --version
	I0115 14:11:42.746663 4035830 ssh_runner.go:195] Run: containerd --version
	I0115 14:11:42.778455 4035830 out.go:177] * Preparing Kubernetes v1.18.20 on containerd 1.6.26 ...
	I0115 14:11:42.780230 4035830 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-062316 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0115 14:11:42.797275 4035830 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0115 14:11:42.801649 4035830 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
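The /etc/hosts update above is rewrite-then-copy rather than an in-place edit, which keeps the file valid if the shell dies midway. Verifying the entry is just:

    grep 'host.minikube.internal' /etc/hosts
    # 192.168.49.1	host.minikube.internal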
	I0115 14:11:42.814716 4035830 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I0115 14:11:42.814793 4035830 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 14:11:42.853107 4035830 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0115 14:11:42.853183 4035830 ssh_runner.go:195] Run: which lz4
	I0115 14:11:42.857599 4035830 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17957-3996034/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0115 14:11:42.857700 4035830 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0115 14:11:42.861828 4035830 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0115 14:11:42.861861 4035830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (489149349 bytes)
	I0115 14:11:45.048339 4035830 containerd.go:548] Took 2.190679 seconds to copy over tarball
	I0115 14:11:45.048485 4035830 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0115 14:11:47.754373 4035830 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.705842868s)
	I0115 14:11:47.754399 4035830 containerd.go:555] Took 2.705984 seconds to extract the tarball
	I0115 14:11:47.754410 4035830 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0115 14:11:47.941413 4035830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0115 14:11:48.046146 4035830 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0115 14:11:48.178270 4035830 ssh_runner.go:195] Run: sudo crictl images --output json
	I0115 14:11:48.222323 4035830 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0115 14:11:48.222349 4035830 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0115 14:11:48.222393 4035830 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 14:11:48.222419 4035830 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0115 14:11:48.222585 4035830 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0115 14:11:48.222610 4035830 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0115 14:11:48.222658 4035830 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0115 14:11:48.222697 4035830 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0115 14:11:48.222723 4035830 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0115 14:11:48.222763 4035830 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0115 14:11:48.224452 4035830 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0115 14:11:48.224516 4035830 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0115 14:11:48.224573 4035830 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0115 14:11:48.224750 4035830 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0115 14:11:48.224452 4035830 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0115 14:11:48.224896 4035830 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 14:11:48.224902 4035830 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0115 14:11:48.224970 4035830 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0115 14:11:48.528186 4035830 containerd.go:252] Checking existence of image with name "registry.k8s.io/pause:3.2" and sha "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c"
	I0115 14:11:48.528261 4035830 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W0115 14:11:48.573200 4035830 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0115 14:11:48.573386 4035830 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.18.20" and sha "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257"
	I0115 14:11:48.573459 4035830 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W0115 14:11:48.583468 4035830 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0115 14:11:48.583626 4035830 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.18.20" and sha "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7"
	I0115 14:11:48.583699 4035830 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W0115 14:11:48.592215 4035830 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0115 14:11:48.592344 4035830 containerd.go:252] Checking existence of image with name "registry.k8s.io/coredns:1.6.7" and sha "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c"
	I0115 14:11:48.592402 4035830 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W0115 14:11:48.594462 4035830 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0115 14:11:48.594656 4035830 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.18.20" and sha "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18"
	I0115 14:11:48.594708 4035830 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W0115 14:11:48.599746 4035830 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0115 14:11:48.599887 4035830 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.18.20" and sha "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79"
	I0115 14:11:48.599948 4035830 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W0115 14:11:48.600156 4035830 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0115 14:11:48.600280 4035830 containerd.go:252] Checking existence of image with name "registry.k8s.io/etcd:3.4.3-0" and sha "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03"
	I0115 14:11:48.600318 4035830 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	W0115 14:11:48.740667 4035830 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0115 14:11:48.740795 4035830 containerd.go:252] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I0115 14:11:48.740861 4035830 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0115 14:11:48.753784 4035830 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0115 14:11:48.753885 4035830 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0115 14:11:48.753966 4035830 ssh_runner.go:195] Run: which crictl
	I0115 14:11:49.057895 4035830 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0115 14:11:49.058476 4035830 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0115 14:11:49.058429 4035830 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0115 14:11:49.058539 4035830 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0115 14:11:49.058578 4035830 ssh_runner.go:195] Run: which crictl
	I0115 14:11:49.058667 4035830 ssh_runner.go:195] Run: which crictl
	I0115 14:11:49.107089 4035830 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0115 14:11:49.107176 4035830 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0115 14:11:49.107277 4035830 ssh_runner.go:195] Run: which crictl
	I0115 14:11:49.389049 4035830 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0115 14:11:49.390022 4035830 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0115 14:11:49.390096 4035830 ssh_runner.go:195] Run: which crictl
	I0115 14:11:49.393924 4035830 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0115 14:11:49.393963 4035830 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0115 14:11:49.394011 4035830 ssh_runner.go:195] Run: which crictl
	I0115 14:11:49.394081 4035830 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0115 14:11:49.394095 4035830 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0115 14:11:49.394117 4035830 ssh_runner.go:195] Run: which crictl
	I0115 14:11:49.397496 4035830 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0115 14:11:49.397541 4035830 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 14:11:49.397585 4035830 ssh_runner.go:195] Run: which crictl
	I0115 14:11:49.397652 4035830 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0115 14:11:49.397707 4035830 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0115 14:11:49.397755 4035830 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0115 14:11:49.397805 4035830 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0115 14:11:49.400340 4035830 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0115 14:11:49.411606 4035830 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0115 14:11:49.412928 4035830 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0115 14:11:49.562389 4035830 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17957-3996034/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0115 14:11:49.562532 4035830 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 14:11:49.562637 4035830 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17957-3996034/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0115 14:11:49.562695 4035830 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17957-3996034/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0115 14:11:49.562761 4035830 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17957-3996034/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0115 14:11:49.591993 4035830 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17957-3996034/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0115 14:11:49.592094 4035830 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17957-3996034/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0115 14:11:49.592148 4035830 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17957-3996034/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0115 14:11:49.635174 4035830 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17957-3996034/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0115 14:11:49.635265 4035830 cache_images.go:92] LoadImages completed in 1.412901706s
	W0115 14:11:49.635340 4035830 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17957-3996034/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7: no such file or directory
	I0115 14:11:49.635395 4035830 ssh_runner.go:195] Run: sudo crictl info
	I0115 14:11:49.675129 4035830 cni.go:84] Creating CNI manager for ""
	I0115 14:11:49.675154 4035830 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0115 14:11:49.675183 4035830 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0115 14:11:49.675205 4035830 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-062316 NodeName:ingress-addon-legacy-062316 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0115 14:11:49.675364 4035830 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "ingress-addon-legacy-062316"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
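The rendered config above is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. With kubeadm v1.18 it can be exercised without touching the node via a dry run (a sketch, assuming the binaries are already unpacked under /var/lib/minikube/binaries):

    sudo /var/lib/minikube/binaries/v1.18.20/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run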
	
	I0115 14:11:49.675433 4035830 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=ingress-addon-legacy-062316 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-062316 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0115 14:11:49.675494 4035830 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0115 14:11:49.685883 4035830 binaries.go:44] Found k8s binaries, skipping transfer
	I0115 14:11:49.685970 4035830 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0115 14:11:49.696312 4035830 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I0115 14:11:49.716500 4035830 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0115 14:11:49.737388 4035830 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2131 bytes)
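systemd only sees the new ExecStart in the 10-kubeadm.conf drop-in after a reload; confirming it took effect (a sketch):

    sudo systemctl daemon-reload
    systemctl cat kubelet | grep -- --hostname-override
    # expect: ... --hostname-override=ingress-addon-legacy-062316 ...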
	I0115 14:11:49.758053 4035830 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0115 14:11:49.762545 4035830 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0115 14:11:49.775770 4035830 certs.go:56] Setting up /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316 for IP: 192.168.49.2
	I0115 14:11:49.775807 4035830 certs.go:190] acquiring lock for shared ca certs: {Name:mk9e910b1d22df90feaffa3b68f77c94f902dcfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:11:49.775939 4035830 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17957-3996034/.minikube/ca.key
	I0115 14:11:49.775992 4035830 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17957-3996034/.minikube/proxy-client-ca.key
	I0115 14:11:49.776045 4035830 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.key
	I0115 14:11:49.776060 4035830 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt with IP's: []
	I0115 14:11:50.108619 4035830 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt ...
	I0115 14:11:50.108656 4035830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt: {Name:mkeba8bb329de62a11324dd85cba5acd19c364ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:11:50.108861 4035830 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.key ...
	I0115 14:11:50.108885 4035830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.key: {Name:mk461354c27a10a0155fd6734bb20569ccb6e4af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:11:50.108984 4035830 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/apiserver.key.dd3b5fb2
	I0115 14:11:50.109005 4035830 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0115 14:11:50.772432 4035830 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/apiserver.crt.dd3b5fb2 ...
	I0115 14:11:50.772463 4035830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/apiserver.crt.dd3b5fb2: {Name:mkcac6c8b5ea48d0d65a03759bf180d1864ebe1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:11:50.772651 4035830 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/apiserver.key.dd3b5fb2 ...
	I0115 14:11:50.772666 4035830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/apiserver.key.dd3b5fb2: {Name:mkec5e113d80865e80d98aa66a783d4695317192 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:11:50.772749 4035830 certs.go:337] copying /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/apiserver.crt
	I0115 14:11:50.772826 4035830 certs.go:341] copying /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/apiserver.key
	I0115 14:11:50.772885 4035830 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/proxy-client.key
	I0115 14:11:50.772902 4035830 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/proxy-client.crt with IP's: []
	I0115 14:11:51.248349 4035830 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/proxy-client.crt ...
	I0115 14:11:51.248380 4035830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/proxy-client.crt: {Name:mkad78c752054bf19c62644b1df3cf80a23c1bb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:11:51.248560 4035830 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/proxy-client.key ...
	I0115 14:11:51.248574 4035830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/proxy-client.key: {Name:mk9f8f102d435a3897245afc7c79da183fd7b6e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:11:51.248657 4035830 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0115 14:11:51.248682 4035830 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0115 14:11:51.248695 4035830 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0115 14:11:51.248709 4035830 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0115 14:11:51.248720 4035830 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17957-3996034/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0115 14:11:51.248737 4035830 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17957-3996034/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0115 14:11:51.248753 4035830 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17957-3996034/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0115 14:11:51.248767 4035830 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17957-3996034/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0115 14:11:51.248816 4035830 certs.go:437] found cert: /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/4001369.pem (1338 bytes)
	W0115 14:11:51.248858 4035830 certs.go:433] ignoring /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/4001369_empty.pem, impossibly tiny 0 bytes
	I0115 14:11:51.248877 4035830 certs.go:437] found cert: /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/ca-key.pem (1675 bytes)
	I0115 14:11:51.248908 4035830 certs.go:437] found cert: /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/ca.pem (1082 bytes)
	I0115 14:11:51.248941 4035830 certs.go:437] found cert: /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/cert.pem (1123 bytes)
	I0115 14:11:51.248969 4035830 certs.go:437] found cert: /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/home/jenkins/minikube-integration/17957-3996034/.minikube/certs/key.pem (1679 bytes)
	I0115 14:11:51.249016 4035830 certs.go:437] found cert: /home/jenkins/minikube-integration/17957-3996034/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17957-3996034/.minikube/files/etc/ssl/certs/40013692.pem (1708 bytes)
	I0115 14:11:51.249052 4035830 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/4001369.pem -> /usr/share/ca-certificates/4001369.pem
	I0115 14:11:51.249069 4035830 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17957-3996034/.minikube/files/etc/ssl/certs/40013692.pem -> /usr/share/ca-certificates/40013692.pem
	I0115 14:11:51.249084 4035830 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17957-3996034/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0115 14:11:51.249646 4035830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0115 14:11:51.276526 4035830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0115 14:11:51.304868 4035830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0115 14:11:51.333074 4035830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0115 14:11:51.362376 4035830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0115 14:11:51.390776 4035830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0115 14:11:51.419343 4035830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0115 14:11:51.446658 4035830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0115 14:11:51.474182 4035830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/certs/4001369.pem --> /usr/share/ca-certificates/4001369.pem (1338 bytes)
	I0115 14:11:51.501949 4035830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/files/etc/ssl/certs/40013692.pem --> /usr/share/ca-certificates/40013692.pem (1708 bytes)
	I0115 14:11:51.529477 4035830 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17957-3996034/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0115 14:11:51.557104 4035830 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0115 14:11:51.577638 4035830 ssh_runner.go:195] Run: openssl version
	I0115 14:11:51.584645 4035830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4001369.pem && ln -fs /usr/share/ca-certificates/4001369.pem /etc/ssl/certs/4001369.pem"
	I0115 14:11:51.595761 4035830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4001369.pem
	I0115 14:11:51.600123 4035830 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 15 14:08 /usr/share/ca-certificates/4001369.pem
	I0115 14:11:51.600230 4035830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4001369.pem
	I0115 14:11:51.608547 4035830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4001369.pem /etc/ssl/certs/51391683.0"
	I0115 14:11:51.619857 4035830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/40013692.pem && ln -fs /usr/share/ca-certificates/40013692.pem /etc/ssl/certs/40013692.pem"
	I0115 14:11:51.631874 4035830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/40013692.pem
	I0115 14:11:51.636498 4035830 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 15 14:08 /usr/share/ca-certificates/40013692.pem
	I0115 14:11:51.636589 4035830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/40013692.pem
	I0115 14:11:51.645000 4035830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/40013692.pem /etc/ssl/certs/3ec20f2e.0"
	I0115 14:11:51.656607 4035830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0115 14:11:51.668176 4035830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0115 14:11:51.672592 4035830 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 15 14:01 /usr/share/ca-certificates/minikubeCA.pem
	I0115 14:11:51.672659 4035830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0115 14:11:51.680800 4035830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
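
The hash-and-symlink rounds above exist because OpenSSL looks up trust anchors in /etc/ssl/certs by subject-hash filename (51391683.0, 3ec20f2e.0, b5213941.0) rather than by the PEM's own name, so each copied cert gets an `openssl x509 -hash -noout` call followed by `ln -fs`. A minimal sketch of one round, with illustrative paths:

package trust

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes a PEM's OpenSSL subject hash and creates
// the <hash>.0 symlink that the system trust store resolves.
func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // b5213941 for minikubeCA.pem above
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(pemPath, link)
}
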
	I0115 14:11:51.692056 4035830 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0115 14:11:51.696271 4035830 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0115 14:11:51.696328 4035830 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-062316 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-062316 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 14:11:51.696413 4035830 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0115 14:11:51.696469 4035830 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0115 14:11:51.740496 4035830 cri.go:89] found id: ""
	I0115 14:11:51.740599 4035830 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0115 14:11:51.750972 4035830 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0115 14:11:51.761344 4035830 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0115 14:11:51.761412 4035830 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0115 14:11:51.771491 4035830 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0115 14:11:51.771546 4035830 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0115 14:11:51.823691 4035830 kubeadm.go:322] W0115 14:11:51.823195    1095 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0115 14:11:51.874049 4035830 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-aws\n", err: exit status 1
	I0115 14:11:51.963479 4035830 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0115 14:11:58.782501 4035830 kubeadm.go:322] W0115 14:11:58.777364    1095 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0115 14:11:58.782666 4035830 kubeadm.go:322] W0115 14:11:58.778566    1095 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0115 14:12:12.257593 4035830 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0115 14:12:12.257651 4035830 kubeadm.go:322] [preflight] Running pre-flight checks
	I0115 14:12:12.257733 4035830 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0115 14:12:12.257785 4035830 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-aws
	I0115 14:12:12.257817 4035830 kubeadm.go:322] OS: Linux
	I0115 14:12:12.257861 4035830 kubeadm.go:322] CGROUPS_CPU: enabled
	I0115 14:12:12.257907 4035830 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0115 14:12:12.257951 4035830 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0115 14:12:12.257996 4035830 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0115 14:12:12.258041 4035830 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0115 14:12:12.258086 4035830 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0115 14:12:12.258153 4035830 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0115 14:12:12.258245 4035830 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0115 14:12:12.258331 4035830 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0115 14:12:12.258427 4035830 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0115 14:12:12.258506 4035830 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0115 14:12:12.258549 4035830 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0115 14:12:12.258616 4035830 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0115 14:12:12.260958 4035830 out.go:204]   - Generating certificates and keys ...
	I0115 14:12:12.261047 4035830 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0115 14:12:12.261125 4035830 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0115 14:12:12.261192 4035830 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0115 14:12:12.261251 4035830 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0115 14:12:12.261313 4035830 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0115 14:12:12.261364 4035830 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0115 14:12:12.261418 4035830 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0115 14:12:12.261549 4035830 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-062316 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0115 14:12:12.261601 4035830 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0115 14:12:12.261724 4035830 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-062316 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0115 14:12:12.261791 4035830 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0115 14:12:12.261856 4035830 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0115 14:12:12.261901 4035830 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0115 14:12:12.261957 4035830 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0115 14:12:12.262017 4035830 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0115 14:12:12.262074 4035830 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0115 14:12:12.262138 4035830 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0115 14:12:12.262189 4035830 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0115 14:12:12.262255 4035830 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0115 14:12:12.264625 4035830 out.go:204]   - Booting up control plane ...
	I0115 14:12:12.264733 4035830 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0115 14:12:12.264826 4035830 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0115 14:12:12.264922 4035830 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0115 14:12:12.265012 4035830 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0115 14:12:12.265181 4035830 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0115 14:12:12.265265 4035830 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.002767 seconds
	I0115 14:12:12.265376 4035830 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0115 14:12:12.265515 4035830 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0115 14:12:12.265578 4035830 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0115 14:12:12.265720 4035830 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-062316 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0115 14:12:12.265782 4035830 kubeadm.go:322] [bootstrap-token] Using token: 48sbqt.jpo0npta1hwbc48y
	I0115 14:12:12.268067 4035830 out.go:204]   - Configuring RBAC rules ...
	I0115 14:12:12.268175 4035830 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0115 14:12:12.268273 4035830 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0115 14:12:12.268408 4035830 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0115 14:12:12.268536 4035830 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0115 14:12:12.268648 4035830 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0115 14:12:12.268731 4035830 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0115 14:12:12.268840 4035830 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0115 14:12:12.268895 4035830 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0115 14:12:12.268943 4035830 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0115 14:12:12.268951 4035830 kubeadm.go:322] 
	I0115 14:12:12.269008 4035830 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0115 14:12:12.269015 4035830 kubeadm.go:322] 
	I0115 14:12:12.269086 4035830 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0115 14:12:12.269093 4035830 kubeadm.go:322] 
	I0115 14:12:12.269117 4035830 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0115 14:12:12.269175 4035830 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0115 14:12:12.269225 4035830 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0115 14:12:12.269232 4035830 kubeadm.go:322] 
	I0115 14:12:12.269281 4035830 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0115 14:12:12.269354 4035830 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0115 14:12:12.269421 4035830 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0115 14:12:12.269428 4035830 kubeadm.go:322] 
	I0115 14:12:12.269512 4035830 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0115 14:12:12.269588 4035830 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0115 14:12:12.269595 4035830 kubeadm.go:322] 
	I0115 14:12:12.269673 4035830 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 48sbqt.jpo0npta1hwbc48y \
	I0115 14:12:12.269781 4035830 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7a6d785f4518c70e5cb54aff2b25c2e4257d667a1215c730d9bd23381d7f6388 \
	I0115 14:12:12.269806 4035830 kubeadm.go:322]     --control-plane 
	I0115 14:12:12.269810 4035830 kubeadm.go:322] 
	I0115 14:12:12.269890 4035830 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0115 14:12:12.269897 4035830 kubeadm.go:322] 
	I0115 14:12:12.269973 4035830 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 48sbqt.jpo0npta1hwbc48y \
	I0115 14:12:12.270086 4035830 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:7a6d785f4518c70e5cb54aff2b25c2e4257d667a1215c730d9bd23381d7f6388 
	I0115 14:12:12.270098 4035830 cni.go:84] Creating CNI manager for ""
	I0115 14:12:12.270106 4035830 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0115 14:12:12.274585 4035830 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0115 14:12:12.276730 4035830 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0115 14:12:12.282126 4035830 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0115 14:12:12.282146 4035830 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0115 14:12:12.303316 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0115 14:12:12.750190 4035830 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0115 14:12:12.750323 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:12.750410 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=71cf7d00913f789829bf5813c1d11b9a83eda53e minikube.k8s.io/name=ingress-addon-legacy-062316 minikube.k8s.io/updated_at=2024_01_15T14_12_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:12.933982 4035830 ops.go:34] apiserver oom_adj: -16
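
The oom_adj probe above confirms that the kubelet gave kube-apiserver a strongly negative OOM score (-16), so under memory pressure the kernel's OOM killer prefers ordinary workloads over the control plane. A minimal sketch of the same read (helper name illustrative):

package oom

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj mirrors `cat /proc/$(pgrep kube-apiserver)/oom_adj`.
func apiserverOOMAdj() (string, error) {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		return "", err // pgrep exits non-zero when nothing matches
	}
	pid := strings.Fields(string(out))[0] // first match is enough here
	adj, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", pid))
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(adj)), nil // "-16" in this run
}
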
	I0115 14:12:12.934104 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:13.434728 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:13.934403 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:14.435139 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:14.934736 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:15.434254 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:15.934881 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:16.434762 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:16.934177 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:17.435041 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:17.935003 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:18.434508 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:18.934801 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:19.434819 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:19.935138 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:20.434194 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:20.934984 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:21.434259 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:21.934639 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:22.434168 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:22.934795 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:23.434722 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:23.934719 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:24.434508 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:24.934275 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:25.434932 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:25.934383 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:26.434894 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:26.934975 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:27.435111 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:27.934805 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:28.434845 4035830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0115 14:12:28.608821 4035830 kubeadm.go:1088] duration metric: took 15.858541214s to wait for elevateKubeSystemPrivileges.
	I0115 14:12:28.608853 4035830 kubeadm.go:406] StartCluster complete in 36.912528507s
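
The burst of identical `kubectl get sa default` runs above is a fixed-cadence poll: right after kubeadm init, the controller-manager needs several seconds to create the default ServiceAccount, and minikube retries roughly every half second until it exists before granting kube-system elevated privileges. A minimal sketch of such a wait loop (function name and timeout are illustrative):

package wait

import (
	"errors"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until the token
// controller has created it, matching the ~500ms cadence in the log.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if cmd.Run() == nil {
			return nil // the service account exists; stop polling
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("default service account never appeared")
}
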
	I0115 14:12:28.608870 4035830 settings.go:142] acquiring lock: {Name:mkf7c3579062a76dbc15f21d34a0f70748bbdf8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:12:28.608930 4035830 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17957-3996034/kubeconfig
	I0115 14:12:28.609675 4035830 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17957-3996034/kubeconfig: {Name:mk3afa6cfd54a2e8849d9a076ecc839592eb1132 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0115 14:12:28.610391 4035830 kapi.go:59] client config for ingress-addon-legacy-062316: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt", KeyFile:"/home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.key", CAFile:"/home/jenkins/minikube-integration/17957-3996034/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9c50), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 14:12:28.611513 4035830 config.go:182] Loaded profile config "ingress-addon-legacy-062316": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I0115 14:12:28.611578 4035830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0115 14:12:28.611667 4035830 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0115 14:12:28.611727 4035830 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-062316"
	I0115 14:12:28.611741 4035830 addons.go:234] Setting addon storage-provisioner=true in "ingress-addon-legacy-062316"
	I0115 14:12:28.611780 4035830 host.go:66] Checking if "ingress-addon-legacy-062316" exists ...
	I0115 14:12:28.611807 4035830 cert_rotation.go:137] Starting client certificate rotation controller
	I0115 14:12:28.612193 4035830 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-062316"
	I0115 14:12:28.612232 4035830 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-062316"
	I0115 14:12:28.612247 4035830 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-062316 --format={{.State.Status}}
	I0115 14:12:28.612548 4035830 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-062316 --format={{.State.Status}}
	I0115 14:12:28.658457 4035830 kapi.go:59] client config for ingress-addon-legacy-062316: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt", KeyFile:"/home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.key", CAFile:"/home/jenkins/minikube-integration/17957-3996034/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9c50), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 14:12:28.658722 4035830 addons.go:234] Setting addon default-storageclass=true in "ingress-addon-legacy-062316"
	I0115 14:12:28.658750 4035830 host.go:66] Checking if "ingress-addon-legacy-062316" exists ...
	I0115 14:12:28.659228 4035830 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-062316 --format={{.State.Status}}
	I0115 14:12:28.665396 4035830 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0115 14:12:28.667439 4035830 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 14:12:28.667461 4035830 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0115 14:12:28.667539 4035830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-062316
	I0115 14:12:28.704713 4035830 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0115 14:12:28.704735 4035830 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0115 14:12:28.704796 4035830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-062316
	I0115 14:12:28.732100 4035830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36459 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/ingress-addon-legacy-062316/id_rsa Username:docker}
	I0115 14:12:28.747474 4035830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36459 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/ingress-addon-legacy-062316/id_rsa Username:docker}
	I0115 14:12:28.948317 4035830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
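
The sed pipeline above edits the CoreDNS Corefile inside the coredns ConfigMap: it inserts a hosts block answering host.minikube.internal with 192.168.49.1 (the host's address on the cluster network), with fallthrough so every other name still reaches the `forward . /etc/resolv.conf` upstream, then replaces the ConfigMap. A minimal Go sketch of the same text transformation, operating on a Corefile string rather than through kubectl:

package corefile

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts block ahead of the forward
// directive, mirroring the sed edit of the coredns ConfigMap above.
func injectHostRecord(corefile, ip, name string) string {
	block := fmt.Sprintf("        hosts {\n           %s %s\n           fallthrough\n        }\n", ip, name)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			b.WriteString(block) // local answer first, then fall through
		}
		b.WriteString(line)
	}
	return b.String()
}
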
	I0115 14:12:28.981771 4035830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0115 14:12:28.985621 4035830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0115 14:12:29.114341 4035830 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-062316" context rescaled to 1 replicas
	I0115 14:12:29.114395 4035830 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0115 14:12:29.116913 4035830 out.go:177] * Verifying Kubernetes components...
	I0115 14:12:29.118778 4035830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 14:12:29.386604 4035830 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0115 14:12:29.490175 4035830 kapi.go:59] client config for ingress-addon-legacy-062316: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt", KeyFile:"/home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.key", CAFile:"/home/jenkins/minikube-integration/17957-3996034/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b9c50), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0115 14:12:29.490453 4035830 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-062316" to be "Ready" ...
	I0115 14:12:29.509801 4035830 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0115 14:12:29.511892 4035830 addons.go:505] enable addons completed in 900.21683ms: enabled=[storage-provisioner default-storageclass]
	I0115 14:12:29.514994 4035830 node_ready.go:49] node "ingress-addon-legacy-062316" has status "Ready":"True"
	I0115 14:12:29.515018 4035830 node_ready.go:38] duration metric: took 24.485464ms waiting for node "ingress-addon-legacy-062316" to be "Ready" ...
	I0115 14:12:29.515029 4035830 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0115 14:12:29.528078 4035830 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-bnclc" in "kube-system" namespace to be "Ready" ...
	I0115 14:12:31.534379 4035830 pod_ready.go:102] pod "coredns-66bff467f8-bnclc" in "kube-system" namespace has status "Ready":"False"
	I0115 14:12:34.039412 4035830 pod_ready.go:102] pod "coredns-66bff467f8-bnclc" in "kube-system" namespace has status "Ready":"False"
	I0115 14:12:36.533433 4035830 pod_ready.go:102] pod "coredns-66bff467f8-bnclc" in "kube-system" namespace has status "Ready":"False"
	I0115 14:12:38.533551 4035830 pod_ready.go:102] pod "coredns-66bff467f8-bnclc" in "kube-system" namespace has status "Ready":"False"
	I0115 14:12:40.533597 4035830 pod_ready.go:102] pod "coredns-66bff467f8-bnclc" in "kube-system" namespace has status "Ready":"False"
	I0115 14:12:43.034530 4035830 pod_ready.go:102] pod "coredns-66bff467f8-bnclc" in "kube-system" namespace has status "Ready":"False"
	I0115 14:12:45.034795 4035830 pod_ready.go:102] pod "coredns-66bff467f8-bnclc" in "kube-system" namespace has status "Ready":"False"
	I0115 14:12:45.533240 4035830 pod_ready.go:92] pod "coredns-66bff467f8-bnclc" in "kube-system" namespace has status "Ready":"True"
	I0115 14:12:45.533266 4035830 pod_ready.go:81] duration metric: took 16.005154233s waiting for pod "coredns-66bff467f8-bnclc" in "kube-system" namespace to be "Ready" ...
	I0115 14:12:45.533277 4035830 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-062316" in "kube-system" namespace to be "Ready" ...
	I0115 14:12:45.537788 4035830 pod_ready.go:92] pod "etcd-ingress-addon-legacy-062316" in "kube-system" namespace has status "Ready":"True"
	I0115 14:12:45.537811 4035830 pod_ready.go:81] duration metric: took 4.526215ms waiting for pod "etcd-ingress-addon-legacy-062316" in "kube-system" namespace to be "Ready" ...
	I0115 14:12:45.537863 4035830 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-062316" in "kube-system" namespace to be "Ready" ...
	I0115 14:12:45.542481 4035830 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-062316" in "kube-system" namespace has status "Ready":"True"
	I0115 14:12:45.542506 4035830 pod_ready.go:81] duration metric: took 4.626003ms waiting for pod "kube-apiserver-ingress-addon-legacy-062316" in "kube-system" namespace to be "Ready" ...
	I0115 14:12:45.542517 4035830 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-062316" in "kube-system" namespace to be "Ready" ...
	I0115 14:12:45.548207 4035830 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-062316" in "kube-system" namespace has status "Ready":"True"
	I0115 14:12:45.548236 4035830 pod_ready.go:81] duration metric: took 5.709886ms waiting for pod "kube-controller-manager-ingress-addon-legacy-062316" in "kube-system" namespace to be "Ready" ...
	I0115 14:12:45.548249 4035830 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gnqnh" in "kube-system" namespace to be "Ready" ...
	I0115 14:12:45.558875 4035830 pod_ready.go:92] pod "kube-proxy-gnqnh" in "kube-system" namespace has status "Ready":"True"
	I0115 14:12:45.558903 4035830 pod_ready.go:81] duration metric: took 10.646306ms waiting for pod "kube-proxy-gnqnh" in "kube-system" namespace to be "Ready" ...
	I0115 14:12:45.558917 4035830 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-062316" in "kube-system" namespace to be "Ready" ...
	I0115 14:12:45.729345 4035830 request.go:629] Waited for 170.282703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-062316
	I0115 14:12:45.928942 4035830 request.go:629] Waited for 196.166668ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-062316
	I0115 14:12:45.931694 4035830 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-062316" in "kube-system" namespace has status "Ready":"True"
	I0115 14:12:45.931766 4035830 pod_ready.go:81] duration metric: took 372.819669ms waiting for pod "kube-scheduler-ingress-addon-legacy-062316" in "kube-system" namespace to be "Ready" ...
	I0115 14:12:45.931792 4035830 pod_ready.go:38] duration metric: took 16.416749361s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
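
Each pod_ready.go wait above reduces to one predicate: the pod's Ready condition in status.conditions must be True. A minimal client-go sketch of that check, assuming a clientset built elsewhere:

package ready

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podReady reports whether a pod's Ready condition is True, the
// predicate behind the pod_ready.go status lines above.
func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
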
	I0115 14:12:45.931819 4035830 api_server.go:52] waiting for apiserver process to appear ...
	I0115 14:12:45.931925 4035830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 14:12:45.954124 4035830 api_server.go:72] duration metric: took 16.839661877s to wait for apiserver process to appear ...
	I0115 14:12:45.954199 4035830 api_server.go:88] waiting for apiserver healthz status ...
	I0115 14:12:45.954233 4035830 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0115 14:12:45.963723 4035830 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0115 14:12:45.964813 4035830 api_server.go:141] control plane version: v1.18.20
	I0115 14:12:45.964862 4035830 api_server.go:131] duration metric: took 10.64217ms to wait for apiserver health ...
	I0115 14:12:45.964898 4035830 system_pods.go:43] waiting for kube-system pods to appear ...
	I0115 14:12:46.129281 4035830 request.go:629] Waited for 164.300275ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0115 14:12:46.135358 4035830 system_pods.go:59] 8 kube-system pods found
	I0115 14:12:46.135393 4035830 system_pods.go:61] "coredns-66bff467f8-bnclc" [9d04a143-3352-4c2f-9033-a6c96c17e412] Running
	I0115 14:12:46.135400 4035830 system_pods.go:61] "etcd-ingress-addon-legacy-062316" [f79d5e5a-6c15-4cc2-8819-e026f31af2bb] Running
	I0115 14:12:46.135405 4035830 system_pods.go:61] "kindnet-qhtvb" [01d146ab-6c7b-40af-8092-cb381b8f4aee] Running
	I0115 14:12:46.135411 4035830 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-062316" [032800e0-ac5d-4227-a82a-3be14136d9d8] Running
	I0115 14:12:46.135416 4035830 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-062316" [13a8c762-fba2-405a-9d08-cdb19cf2eb82] Running
	I0115 14:12:46.135421 4035830 system_pods.go:61] "kube-proxy-gnqnh" [982a7f25-b4c9-4f91-b767-d05f70b99369] Running
	I0115 14:12:46.135430 4035830 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-062316" [7eb91e3a-7e5b-4a29-ba3c-5a54b989b5b7] Running
	I0115 14:12:46.135440 4035830 system_pods.go:61] "storage-provisioner" [44dab0bc-6d70-45f5-bf83-c61be3b3baa8] Running
	I0115 14:12:46.135446 4035830 system_pods.go:74] duration metric: took 170.527217ms to wait for pod list to return data ...
	I0115 14:12:46.135458 4035830 default_sa.go:34] waiting for default service account to be created ...
	I0115 14:12:46.328886 4035830 request.go:629] Waited for 193.339982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0115 14:12:46.331329 4035830 default_sa.go:45] found service account: "default"
	I0115 14:12:46.331356 4035830 default_sa.go:55] duration metric: took 195.89118ms for default service account to be created ...
	I0115 14:12:46.331367 4035830 system_pods.go:116] waiting for k8s-apps to be running ...
	I0115 14:12:46.528806 4035830 request.go:629] Waited for 197.330697ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0115 14:12:46.534603 4035830 system_pods.go:86] 8 kube-system pods found
	I0115 14:12:46.534636 4035830 system_pods.go:89] "coredns-66bff467f8-bnclc" [9d04a143-3352-4c2f-9033-a6c96c17e412] Running
	I0115 14:12:46.534643 4035830 system_pods.go:89] "etcd-ingress-addon-legacy-062316" [f79d5e5a-6c15-4cc2-8819-e026f31af2bb] Running
	I0115 14:12:46.534648 4035830 system_pods.go:89] "kindnet-qhtvb" [01d146ab-6c7b-40af-8092-cb381b8f4aee] Running
	I0115 14:12:46.534654 4035830 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-062316" [032800e0-ac5d-4227-a82a-3be14136d9d8] Running
	I0115 14:12:46.534701 4035830 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-062316" [13a8c762-fba2-405a-9d08-cdb19cf2eb82] Running
	I0115 14:12:46.534710 4035830 system_pods.go:89] "kube-proxy-gnqnh" [982a7f25-b4c9-4f91-b767-d05f70b99369] Running
	I0115 14:12:46.534715 4035830 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-062316" [7eb91e3a-7e5b-4a29-ba3c-5a54b989b5b7] Running
	I0115 14:12:46.534724 4035830 system_pods.go:89] "storage-provisioner" [44dab0bc-6d70-45f5-bf83-c61be3b3baa8] Running
	I0115 14:12:46.534732 4035830 system_pods.go:126] duration metric: took 203.360156ms to wait for k8s-apps to be running ...
	I0115 14:12:46.534768 4035830 system_svc.go:44] waiting for kubelet service to be running ....
	I0115 14:12:46.534829 4035830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 14:12:46.548496 4035830 system_svc.go:56] duration metric: took 13.728672ms WaitForService to wait for kubelet.
	I0115 14:12:46.548561 4035830 kubeadm.go:581] duration metric: took 17.434105014s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0115 14:12:46.548587 4035830 node_conditions.go:102] verifying NodePressure condition ...
	I0115 14:12:46.729220 4035830 request.go:629] Waited for 180.564449ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0115 14:12:46.732227 4035830 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0115 14:12:46.732257 4035830 node_conditions.go:123] node cpu capacity is 2
	I0115 14:12:46.732268 4035830 node_conditions.go:105] duration metric: took 183.675261ms to run NodePressure ...
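
The recurring request.go:629 "Waited for ... due to client-side throttling" lines above come from client-go's own rate limiter, not the API server: with QPS:0 and Burst:0 in the rest.Config dumps earlier, client-go falls back to its defaults (5 requests/s, burst 10), so back-to-back status checks queue for a few hundred milliseconds. A minimal sketch of raising those limits (values illustrative):

package client

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFastClient raises the client-side limits that produce the
// request.go "Waited for ..." messages in this log.
func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // zero means "use the default of 5"
	cfg.Burst = 100 // zero means "use the default of 10"
	return kubernetes.NewForConfig(cfg)
}
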
	I0115 14:12:46.732282 4035830 start.go:228] waiting for startup goroutines ...
	I0115 14:12:46.732299 4035830 start.go:233] waiting for cluster config update ...
	I0115 14:12:46.732312 4035830 start.go:242] writing updated cluster config ...
	I0115 14:12:46.732607 4035830 ssh_runner.go:195] Run: rm -f paused
	I0115 14:12:46.790978 4035830 start.go:600] kubectl: 1.29.0, cluster: 1.18.20 (minor skew: 11)
	I0115 14:12:46.793029 4035830 out.go:177] 
	W0115 14:12:46.795035 4035830 out.go:239] ! /usr/local/bin/kubectl is version 1.29.0, which may have incompatibilities with Kubernetes 1.18.20.
	I0115 14:12:46.797002 4035830 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0115 14:12:46.798815 4035830 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-062316" cluster and "default" namespace by default
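
The closing warning is kubectl's version-skew policy at work: kubectl is only supported within one minor version of the API server, and 1.29 against 1.18 is a skew of 11 minors, hence the hint to use the version-matched `minikube kubectl`. The arithmetic behind "minor skew: 11", as a sketch:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the minor-version distance between client and
// server; anything above 1 triggers the warning printed above.
func minorSkew(client, server string) int {
	minor := func(v string) int {
		n, _ := strconv.Atoi(strings.Split(v, ".")[1])
		return n
	}
	d := minor(client) - minor(server)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	fmt.Println(minorSkew("1.29.0", "1.18.20")) // 11
}
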
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c0fe9bd11c891       dd1b12fcb6097       9 seconds ago        Exited              hello-world-app           2                   08cef7204bd49       hello-world-app-5f5d8b66bb-7vblk
	1c9e0101f6b06       74077e780ec71       33 seconds ago       Running             nginx                     0                   450cf6ce55e57       nginx
	35f2eeb51bf59       d7f0cba3aa5bf       50 seconds ago       Exited              controller                0                   82af00e583eda       ingress-nginx-controller-7fcf777cb7-qxd4p
	d3b8a30bb6496       a883f7fc35610       54 seconds ago       Exited              patch                     0                   d40f9cc233a30       ingress-nginx-admission-patch-lwwtx
	0eec5ccf7651e       a883f7fc35610       54 seconds ago       Exited              create                    0                   70ae6062877dd       ingress-nginx-admission-create-p6c74
	387f05e37e6f9       6e17ba78cf3eb       About a minute ago   Running             coredns                   0                   f6fc5706e1d4e       coredns-66bff467f8-bnclc
	889ba7a26c7bf       ba04bb24b9575       About a minute ago   Running             storage-provisioner       0                   6341a4f97a106       storage-provisioner
	3369fa7b49e2c       04b4eaa3d3db8       About a minute ago   Running             kindnet-cni               0                   95a559c86d4b2       kindnet-qhtvb
	25be33de0ac59       565297bc6f7d4       About a minute ago   Running             kube-proxy                0                   68c22054e68f0       kube-proxy-gnqnh
	56a2e266b2eec       095f37015706d       About a minute ago   Running             kube-scheduler            0                   b28838dd83b0d       kube-scheduler-ingress-addon-legacy-062316
	6c27f23b57268       ab707b0a0ea33       About a minute ago   Running             etcd                      0                   058bd920dacc3       etcd-ingress-addon-legacy-062316
	4659107188f2a       68a4fac29a865       About a minute ago   Running             kube-controller-manager   0                   c83236ff03773       kube-controller-manager-ingress-addon-legacy-062316
	9bb4bf812acbf       2694cf044d665       About a minute ago   Running             kube-apiserver            0                   dcd0771353fdc       kube-apiserver-ingress-addon-legacy-062316
	
	
	==> containerd <==
	Jan 15 14:13:34 ingress-addon-legacy-062316 containerd[831]: time="2024-01-15T14:13:34.952505054Z" level=info msg="RemoveContainer for \"96aa5f5dfda53093243f2c25c4faedb63dea8a7eb68d1db31e47d72ffdfdf368\""
	Jan 15 14:13:34 ingress-addon-legacy-062316 containerd[831]: time="2024-01-15T14:13:34.958971612Z" level=info msg="RemoveContainer for \"96aa5f5dfda53093243f2c25c4faedb63dea8a7eb68d1db31e47d72ffdfdf368\" returns successfully"
	Jan 15 14:13:36 ingress-addon-legacy-062316 containerd[831]: time="2024-01-15T14:13:36.577007099Z" level=info msg="StopContainer for \"35f2eeb51bf596afcc8fa901294a4bb3f27186f41d836d34ad32dfe3a8e0929f\" with timeout 2 (s)"
	Jan 15 14:13:36 ingress-addon-legacy-062316 containerd[831]: time="2024-01-15T14:13:36.577736694Z" level=info msg="Stop container \"35f2eeb51bf596afcc8fa901294a4bb3f27186f41d836d34ad32dfe3a8e0929f\" with signal terminated"
	Jan 15 14:13:36 ingress-addon-legacy-062316 containerd[831]: time="2024-01-15T14:13:36.587871498Z" level=info msg="StopContainer for \"35f2eeb51bf596afcc8fa901294a4bb3f27186f41d836d34ad32dfe3a8e0929f\" with timeout 2 (s)"
	Jan 15 14:13:36 ingress-addon-legacy-062316 containerd[831]: time="2024-01-15T14:13:36.595610595Z" level=info msg="Skipping the sending of signal terminated to container \"35f2eeb51bf596afcc8fa901294a4bb3f27186f41d836d34ad32dfe3a8e0929f\" because a prior stop with timeout>0 request already sent the signal"
	Jan 15 14:13:38 ingress-addon-legacy-062316 containerd[831]: time="2024-01-15T14:13:38.596114448Z" level=info msg="Kill container \"35f2eeb51bf596afcc8fa901294a4bb3f27186f41d836d34ad32dfe3a8e0929f\""
	Jan 15 14:13:38 ingress-addon-legacy-062316 containerd[831]: time="2024-01-15T14:13:38.596125836Z" level=info msg="Kill container \"35f2eeb51bf596afcc8fa901294a4bb3f27186f41d836d34ad32dfe3a8e0929f\""
	Jan 15 14:13:38 ingress-addon-legacy-062316 containerd[831]: time="2024-01-15T14:13:38.677541551Z" level=info msg="shim disconnected" id=35f2eeb51bf596afcc8fa901294a4bb3f27186f41d836d34ad32dfe3a8e0929f
	Jan 15 14:13:38 ingress-addon-legacy-062316 containerd[831]: time="2024-01-15T14:13:38.677844467Z" level=warning msg="cleaning up after shim disconnected" id=35f2eeb51bf596afcc8fa901294a4bb3f27186f41d836d34ad32dfe3a8e0929f namespace=k8s.io
	Jan 15 14:13:38 ingress-addon-legacy-062316 containerd[831]: time="2024-01-15T14:13:38.677930020Z" level=info msg="cleaning up dead shim"
	Jan 15 14:13:38 ingress-addon-legacy-062316 containerd[831]: time="2024-01-15T14:13:38.695388727Z" level=warning msg="cleanup warnings time=\"2024-01-15T14:13:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4630 runtime=io.containerd.runc.v2\n"
	Jan 15 14:13:38 ingress-addon-legacy-062316 containerd[831]: time="2024-01-15T14:13:38.698404453Z" level=info msg="StopContainer for \"35f2eeb51bf596afcc8fa901294a4bb3f27186f41d836d34ad32dfe3a8e0929f\" returns successfully"
	Jan 15 14:13:38 ingress-addon-legacy-062316 containerd[831]: time="2024-01-15T14:13:38.698580883Z" level=info msg="StopContainer for \"35f2eeb51bf596afcc8fa901294a4bb3f27186f41d836d34ad32dfe3a8e0929f\" returns successfully"
	Jan 15 14:13:38 ingress-addon-legacy-062316 containerd[831]: time="2024-01-15T14:13:38.699111387Z" level=info msg="StopPodSandbox for \"82af00e583eda2665c27c7be22ac80fba0b2606b6f253ad995cea4fce6453d4b\""
	Jan 15 14:13:38 ingress-addon-legacy-062316 containerd[831]: time="2024-01-15T14:13:38.699364394Z" level=info msg="StopPodSandbox for \"82af00e583eda2665c27c7be22ac80fba0b2606b6f253ad995cea4fce6453d4b\""
	Jan 15 14:13:38 ingress-addon-legacy-062316 containerd[831]: time="2024-01-15T14:13:38.699435858Z" level=info msg="Container to stop \"35f2eeb51bf596afcc8fa901294a4bb3f27186f41d836d34ad32dfe3a8e0929f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jan 15 14:13:38 ingress-addon-legacy-062316 containerd[831]: time="2024-01-15T14:13:38.699367290Z" level=info msg="Container to stop \"35f2eeb51bf596afcc8fa901294a4bb3f27186f41d836d34ad32dfe3a8e0929f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jan 15 14:13:38 ingress-addon-legacy-062316 containerd[831]: time="2024-01-15T14:13:38.738450200Z" level=info msg="shim disconnected" id=82af00e583eda2665c27c7be22ac80fba0b2606b6f253ad995cea4fce6453d4b
	Jan 15 14:13:38 ingress-addon-legacy-062316 containerd[831]: time="2024-01-15T14:13:38.738510810Z" level=warning msg="cleaning up after shim disconnected" id=82af00e583eda2665c27c7be22ac80fba0b2606b6f253ad995cea4fce6453d4b namespace=k8s.io
	Jan 15 14:13:38 ingress-addon-legacy-062316 containerd[831]: time="2024-01-15T14:13:38.738520951Z" level=info msg="cleaning up dead shim"
	Jan 15 14:13:38 ingress-addon-legacy-062316 containerd[831]: time="2024-01-15T14:13:38.749132935Z" level=warning msg="cleanup warnings time=\"2024-01-15T14:13:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4669 runtime=io.containerd.runc.v2\n"
	Jan 15 14:13:38 ingress-addon-legacy-062316 containerd[831]: time="2024-01-15T14:13:38.792091543Z" level=error msg="StopPodSandbox for \"82af00e583eda2665c27c7be22ac80fba0b2606b6f253ad995cea4fce6453d4b\" failed" error="failed to destroy network for sandbox \"82af00e583eda2665c27c7be22ac80fba0b2606b6f253ad995cea4fce6453d4b\": plugin type=\"portmap\" failed (delete): could not teardown ipv4 dnat: running [/usr/sbin/iptables -t nat -F CNI-DN-f2e86ed240a03939556de --wait]: exit status 1: iptables: No chain/target/match by that name.\n"
	Jan 15 14:13:38 ingress-addon-legacy-062316 containerd[831]: time="2024-01-15T14:13:38.838349815Z" level=info msg="TearDown network for sandbox \"82af00e583eda2665c27c7be22ac80fba0b2606b6f253ad995cea4fce6453d4b\" successfully"
	Jan 15 14:13:38 ingress-addon-legacy-062316 containerd[831]: time="2024-01-15T14:13:38.838401203Z" level=info msg="StopPodSandbox for \"82af00e583eda2665c27c7be22ac80fba0b2606b6f253ad995cea4fce6453d4b\" returns successfully"
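	
	Note: the StopPodSandbox error above is the CNI portmap plugin failing to flush a DNAT chain (CNI-DN-f2e86ed240a03939556de) that has already been deleted; iptables exits 1 with "No chain/target/match by that name", and the immediately following retry tears the network down successfully. A hedged way to check for leftover CNI NAT chains on the node, using only standard minikube ssh and iptables:
	
	  # an empty result means the pod's DNAT chain was already cleaned up
	  out/minikube-linux-arm64 -p ingress-addon-legacy-062316 ssh "sudo iptables -t nat -S | grep CNI-DN || true"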
	
	
	==> coredns [387f05e37e6f908835a1445872eab46932116febaff61e72b77f43e98e7fce1e] <==
	[INFO] 10.244.0.5:40030 - 37151 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000069693s
	[INFO] 10.244.0.5:40030 - 38699 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001444333s
	[INFO] 10.244.0.5:49522 - 52024 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00219435s
	[INFO] 10.244.0.5:40030 - 54527 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002444485s
	[INFO] 10.244.0.5:49522 - 33242 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002421626s
	[INFO] 10.244.0.5:49522 - 29033 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000122984s
	[INFO] 10.244.0.5:40030 - 34351 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000158084s
	[INFO] 10.244.0.5:53247 - 29312 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000076478s
	[INFO] 10.244.0.5:53247 - 62287 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00058569s
	[INFO] 10.244.0.5:32811 - 42491 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000076445s
	[INFO] 10.244.0.5:32811 - 35637 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000049935s
	[INFO] 10.244.0.5:32811 - 50130 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000161759s
	[INFO] 10.244.0.5:53247 - 61957 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000075584s
	[INFO] 10.244.0.5:53247 - 20614 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000056614s
	[INFO] 10.244.0.5:53247 - 40151 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00010316s
	[INFO] 10.244.0.5:32811 - 27667 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000146433s
	[INFO] 10.244.0.5:53247 - 5882 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000061783s
	[INFO] 10.244.0.5:32811 - 52000 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000033788s
	[INFO] 10.244.0.5:32811 - 48741 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000063571s
	[INFO] 10.244.0.5:32811 - 42258 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000675033s
	[INFO] 10.244.0.5:32811 - 39997 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000074681s
	[INFO] 10.244.0.5:32811 - 64610 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.004137007s
	[INFO] 10.244.0.5:53247 - 48813 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00638428s
	[INFO] 10.244.0.5:53247 - 4540 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001073209s
	[INFO] 10.244.0.5:53247 - 60935 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000044856s
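	
	Note: the NXDOMAIN/NOERROR pattern above is normal Kubernetes search-path expansion, not a resolution failure: with ndots:5 the pod's resolver appends each search suffix (default.svc.cluster.local, svc.cluster.local, cluster.local, then the host's us-east-2.compute.internal domain) before the fully-qualified name finally answers NOERROR. A sketch for confirming this from the nginx pod named in this report; the exact resolv.conf contents shown are an assumption inferred from the suffixes logged above:
	
	  kubectl --context ingress-addon-legacy-062316 exec nginx -- cat /etc/resolv.conf
	  # expected shape (assumed):
	  #   search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	  #   options ndots:5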
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-062316
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-062316
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=71cf7d00913f789829bf5813c1d11b9a83eda53e
	                    minikube.k8s.io/name=ingress-addon-legacy-062316
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_15T14_12_12_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Jan 2024 14:12:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-062316
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Jan 2024 14:13:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Jan 2024 14:13:15 +0000   Mon, 15 Jan 2024 14:12:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Jan 2024 14:13:15 +0000   Mon, 15 Jan 2024 14:12:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Jan 2024 14:13:15 +0000   Mon, 15 Jan 2024 14:12:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Jan 2024 14:13:15 +0000   Mon, 15 Jan 2024 14:12:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-062316
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 ace63469b0e140ef93c429cf1f8dcba6
	  System UUID:                7261e7f1-a159-4ee2-af77-ac175db42b1f
	  Boot ID:                    489f1f75-cead-4e0d-97ee-b5bdbf9f668e
	  Kernel Version:             5.15.0-1051-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.26
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-7vblk                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 coredns-66bff467f8-bnclc                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     76s
	  kube-system                 etcd-ingress-addon-legacy-062316                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kindnet-qhtvb                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      76s
	  kube-system                 kube-apiserver-ingress-addon-legacy-062316             250m (12%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-062316    200m (10%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-gnqnh                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-ingress-addon-legacy-062316             100m (5%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 103s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  103s (x4 over 103s)  kubelet     Node ingress-addon-legacy-062316 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s (x4 over 103s)  kubelet     Node ingress-addon-legacy-062316 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s (x3 over 103s)  kubelet     Node ingress-addon-legacy-062316 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  103s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 89s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  89s                  kubelet     Node ingress-addon-legacy-062316 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    89s                  kubelet     Node ingress-addon-legacy-062316 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     89s                  kubelet     Node ingress-addon-legacy-062316 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  89s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                79s                  kubelet     Node ingress-addon-legacy-062316 status is now: NodeReady
	  Normal  Starting                 74s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.001249] FS-Cache: O-key=[8] '2fe4c90000000000'
	[  +0.000750] FS-Cache: N-cookie c=000000d2 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.001008] FS-Cache: N-cookie d=000000006e17dfe5{9p.inode} n=00000000a2b82f59
	[  +0.001133] FS-Cache: N-key=[8] '2fe4c90000000000'
	[  +0.002541] FS-Cache: Duplicate cookie detected
	[  +0.000710] FS-Cache: O-cookie c=000000cc [p=000000c9 fl=226 nc=0 na=1]
	[  +0.001123] FS-Cache: O-cookie d=000000006e17dfe5{9p.inode} n=00000000cc6c9186
	[  +0.001173] FS-Cache: O-key=[8] '2fe4c90000000000'
	[  +0.000805] FS-Cache: N-cookie c=000000d3 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.000925] FS-Cache: N-cookie d=000000006e17dfe5{9p.inode} n=00000000ce970178
	[  +0.001161] FS-Cache: N-key=[8] '2fe4c90000000000'
	[  +2.917667] FS-Cache: Duplicate cookie detected
	[  +0.000724] FS-Cache: O-cookie c=000000ca [p=000000c9 fl=226 nc=0 na=1]
	[  +0.001121] FS-Cache: O-cookie d=000000006e17dfe5{9p.inode} n=000000002657d9d2
	[  +0.001143] FS-Cache: O-key=[8] '2ee4c90000000000'
	[  +0.000743] FS-Cache: N-cookie c=000000d5 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.000970] FS-Cache: N-cookie d=000000006e17dfe5{9p.inode} n=00000000a2b82f59
	[  +0.001139] FS-Cache: N-key=[8] '2ee4c90000000000'
	[  +0.422615] FS-Cache: Duplicate cookie detected
	[  +0.000867] FS-Cache: O-cookie c=000000cf [p=000000c9 fl=226 nc=0 na=1]
	[  +0.001138] FS-Cache: O-cookie d=000000006e17dfe5{9p.inode} n=000000006c2f7aa3
	[  +0.001141] FS-Cache: O-key=[8] '34e4c90000000000'
	[  +0.000856] FS-Cache: N-cookie c=000000d6 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.001120] FS-Cache: N-cookie d=000000006e17dfe5{9p.inode} n=00000000a9ffa64b
	[  +0.001140] FS-Cache: N-key=[8] '34e4c90000000000'
	
	
	==> etcd [6c27f23b572682de653e9400036ceb9c3f5c2f4df969717d5fbfd719278113c6] <==
	raft2024/01/15 14:12:03 INFO: aec36adc501070cc became follower at term 0
	raft2024/01/15 14:12:03 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2024/01/15 14:12:03 INFO: aec36adc501070cc became follower at term 1
	raft2024/01/15 14:12:03 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-01-15 14:12:04.079285 W | auth: simple token is not cryptographically signed
	2024-01-15 14:12:04.243364 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-01-15 14:12:04.327271 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/01/15 14:12:04 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-01-15 14:12:04.579316 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2024-01-15 14:12:04.580418 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-15 14:12:04.580586 I | embed: listening for peers on 192.168.49.2:2380
	2024-01-15 14:12:04.580713 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/01/15 14:12:05 INFO: aec36adc501070cc is starting a new election at term 1
	raft2024/01/15 14:12:05 INFO: aec36adc501070cc became candidate at term 2
	raft2024/01/15 14:12:05 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2024/01/15 14:12:05 INFO: aec36adc501070cc became leader at term 2
	raft2024/01/15 14:12:05 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2024-01-15 14:12:05.066531 I | etcdserver: setting up the initial cluster version to 3.4
	2024-01-15 14:12:05.067267 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-01-15 14:12:05.067466 I | etcdserver/api: enabled capabilities for version 3.4
	2024-01-15 14:12:05.067636 I | etcdserver: published {Name:ingress-addon-legacy-062316 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2024-01-15 14:12:05.067738 I | embed: ready to serve client requests
	2024-01-15 14:12:05.069204 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-15 14:12:05.069558 I | embed: ready to serve client requests
	2024-01-15 14:12:05.070837 I | embed: serving client requests on 192.168.49.2:2379
	
	
	==> kernel <==
	 14:13:44 up 18:56,  0 users,  load average: 0.78, 1.42, 2.02
	Linux ingress-addon-legacy-062316 5.15.0-1051-aws #56~20.04.1-Ubuntu SMP Tue Nov 28 15:43:06 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [3369fa7b49e2cab7d2cdba4c60362a46c1826accbac5e1f06bb62829e4fdbcf4] <==
	I0115 14:12:30.627366       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0115 14:12:30.627501       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0115 14:12:30.627697       1 main.go:116] setting mtu 1500 for CNI 
	I0115 14:12:30.627746       1 main.go:146] kindnetd IP family: "ipv4"
	I0115 14:12:30.627856       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0115 14:12:31.026321       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:12:31.026349       1 main.go:227] handling current node
	I0115 14:12:41.130067       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:12:41.130095       1 main.go:227] handling current node
	I0115 14:12:51.142373       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:12:51.142482       1 main.go:227] handling current node
	I0115 14:13:01.146908       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:13:01.146939       1 main.go:227] handling current node
	I0115 14:13:11.156892       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:13:11.156921       1 main.go:227] handling current node
	I0115 14:13:21.160215       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:13:21.160243       1 main.go:227] handling current node
	I0115 14:13:31.163348       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:13:31.163377       1 main.go:227] handling current node
	I0115 14:13:41.168721       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0115 14:13:41.168809       1 main.go:227] handling current node
	
	
	==> kube-apiserver [9bb4bf812acbfd34b444e87eb9ce28e14f871d96371d57f0166ead46b568fe04] <==
	I0115 14:12:09.047676       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	E0115 14:12:09.173118       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0115 14:12:09.238486       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0115 14:12:09.238733       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0115 14:12:09.238819       1 cache.go:39] Caches are synced for autoregister controller
	I0115 14:12:09.240023       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0115 14:12:09.261590       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0115 14:12:10.036192       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0115 14:12:10.036223       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0115 14:12:10.059149       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0115 14:12:10.067678       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0115 14:12:10.067887       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0115 14:12:10.447064       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0115 14:12:10.489183       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0115 14:12:10.580505       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0115 14:12:10.581463       1 controller.go:609] quota admission added evaluator for: endpoints
	I0115 14:12:10.584974       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0115 14:12:11.476916       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0115 14:12:12.155384       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0115 14:12:12.238518       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0115 14:12:15.547047       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0115 14:12:28.249774       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0115 14:12:28.249792       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0115 14:12:47.649878       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0115 14:13:08.284083       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	
	==> kube-controller-manager [4659107188f2a4668c05601a0131042efd6702ece11d23d5d1d336c54219b172] <==
	I0115 14:12:28.409554       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-062316", UID:"c0562923-dab3-4393-92ad-046c8496b363", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-062316 event: Registered Node ingress-addon-legacy-062316 in Controller
	I0115 14:12:28.433496       1 shared_informer.go:230] Caches are synced for stateful set 
	I0115 14:12:28.433728       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"61e6aad0-ae43-480a-a7ba-7ea1b41bca71", APIVersion:"apps/v1", ResourceVersion:"320", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-bnclc
	I0115 14:12:28.433834       1 shared_informer.go:230] Caches are synced for disruption 
	I0115 14:12:28.433929       1 disruption.go:339] Sending events to api server.
	I0115 14:12:28.474918       1 shared_informer.go:230] Caches are synced for attach detach 
	I0115 14:12:28.573576       1 shared_informer.go:230] Caches are synced for endpoint 
	I0115 14:12:28.676598       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"3ef9bab6-6758-45b6-a86d-d4d7e3f1f945", APIVersion:"apps/v1", ResourceVersion:"375", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0115 14:12:28.723892       1 shared_informer.go:230] Caches are synced for HPA 
	I0115 14:12:28.757399       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"61e6aad0-ae43-480a-a7ba-7ea1b41bca71", APIVersion:"apps/v1", ResourceVersion:"376", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-klz8q
	I0115 14:12:28.809564       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I0115 14:12:28.822209       1 shared_informer.go:230] Caches are synced for resource quota 
	I0115 14:12:28.828677       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0115 14:12:28.828696       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0115 14:12:28.877511       1 shared_informer.go:230] Caches are synced for resource quota 
	I0115 14:12:28.892914       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0115 14:12:47.671002       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"0558a3a3-2049-49b9-81ab-41cb21d631bd", APIVersion:"apps/v1", ResourceVersion:"480", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0115 14:12:47.671054       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"5cee6015-5566-4b6b-9bbc-9c3f3a6bf0bd", APIVersion:"batch/v1", ResourceVersion:"481", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-p6c74
	I0115 14:12:47.672358       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"55730779-e26d-46fb-8f55-d2412b1ac590", APIVersion:"apps/v1", ResourceVersion:"482", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-qxd4p
	I0115 14:12:47.764072       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"8e23b836-ef75-42aa-a452-a970885667c3", APIVersion:"batch/v1", ResourceVersion:"498", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-lwwtx
	I0115 14:12:50.794600       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"8e23b836-ef75-42aa-a452-a970885667c3", APIVersion:"batch/v1", ResourceVersion:"506", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0115 14:12:50.837817       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"5cee6015-5566-4b6b-9bbc-9c3f3a6bf0bd", APIVersion:"batch/v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0115 14:13:17.043749       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"b04cc7be-3bb8-4a0f-992b-5ec6a00f44ae", APIVersion:"apps/v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0115 14:13:17.047268       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"aa4802d1-6bb3-4a30-9844-4991d42d4fb5", APIVersion:"apps/v1", ResourceVersion:"612", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-7vblk
	E0115 14:13:41.239670       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-xjmpx" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	
	==> kube-proxy [25be33de0ac596be349dd5a2be91b3b00dc3bc50084f2de12ca39aa492a2cd8b] <==
	W0115 14:12:30.576688       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0115 14:12:30.588644       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0115 14:12:30.588870       1 server_others.go:186] Using iptables Proxier.
	I0115 14:12:30.589404       1 server.go:583] Version: v1.18.20
	I0115 14:12:30.592241       1 config.go:315] Starting service config controller
	I0115 14:12:30.592319       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0115 14:12:30.592418       1 config.go:133] Starting endpoints config controller
	I0115 14:12:30.592463       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0115 14:12:30.693848       1 shared_informer.go:230] Caches are synced for service config 
	I0115 14:12:30.693848       1 shared_informer.go:230] Caches are synced for endpoints config 
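	
	Note: the first kube-proxy line documents the default: with --proxy-mode unset, v1.18 assumes the iptables proxier. The effective mode can be read back from kube-proxy's metrics endpoint (port 10249 by default); a sketch using only standard minikube ssh and curl:
	
	  out/minikube-linux-arm64 -p ingress-addon-legacy-062316 ssh "curl -s http://localhost:10249/proxyMode"
	  # prints: iptables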
	
	
	==> kube-scheduler [56a2e266b2eecf6e647e9299058ae874ccfadf0d52c3ac8439be3f0049b00197] <==
	W0115 14:12:09.209935       1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0115 14:12:09.210131       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0115 14:12:09.210213       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0115 14:12:09.210294       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0115 14:12:09.277220       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0115 14:12:09.277416       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0115 14:12:09.279780       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0115 14:12:09.280458       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0115 14:12:09.280588       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0115 14:12:09.280741       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0115 14:12:09.282839       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0115 14:12:09.284947       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0115 14:12:09.291868       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0115 14:12:09.295933       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0115 14:12:09.299555       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0115 14:12:09.299806       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0115 14:12:09.300013       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0115 14:12:09.300188       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0115 14:12:09.300377       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0115 14:12:09.300567       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0115 14:12:09.302120       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0115 14:12:09.302310       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0115 14:12:10.109159       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0115 14:12:10.580841       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0115 14:12:28.529689       1 factory.go:503] pod kube-system/coredns-66bff467f8-klz8q is already present in the backoff queue
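	
	Note: the authentication warnings at the top of this log are startup-ordering noise; the list errors clear once the apiserver's caches sync at 14:12:10. If the extension-apiserver-authentication lookup failure persisted, the fix the log itself suggests would look like the sketch below; the binding name and service account are hypothetical placeholders, since kubeadm schedulers normally authenticate with a client certificate rather than a service account:
	
	  # hypothetical concrete form of the rolebinding suggested in the log above
	  kubectl create rolebinding scheduler-auth-reader -n kube-system \
	    --role=extension-apiserver-authentication-reader \
	    --serviceaccount=kube-system:kube-scheduler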
	
	
	==> kubelet <==
	Jan 15 14:13:29 ingress-addon-legacy-062316 kubelet[1641]: I0115 14:13:29.650667    1641 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: c03aa7f4a868c351a37ca1a1a86c01a033905f57885a04ad4bbfaf3d10d98116
	Jan 15 14:13:29 ingress-addon-legacy-062316 kubelet[1641]: E0115 14:13:29.651040    1641 pod_workers.go:191] Error syncing pod e478593e-53df-4d86-bb73-45d50902217f ("kube-ingress-dns-minikube_kube-system(e478593e-53df-4d86-bb73-45d50902217f)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(e478593e-53df-4d86-bb73-45d50902217f)"
	Jan 15 14:13:33 ingress-addon-legacy-062316 kubelet[1641]: I0115 14:13:33.019010    1641 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-cqwxh" (UniqueName: "kubernetes.io/secret/e478593e-53df-4d86-bb73-45d50902217f-minikube-ingress-dns-token-cqwxh") pod "e478593e-53df-4d86-bb73-45d50902217f" (UID: "e478593e-53df-4d86-bb73-45d50902217f")
	Jan 15 14:13:33 ingress-addon-legacy-062316 kubelet[1641]: I0115 14:13:33.023586    1641 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e478593e-53df-4d86-bb73-45d50902217f-minikube-ingress-dns-token-cqwxh" (OuterVolumeSpecName: "minikube-ingress-dns-token-cqwxh") pod "e478593e-53df-4d86-bb73-45d50902217f" (UID: "e478593e-53df-4d86-bb73-45d50902217f"). InnerVolumeSpecName "minikube-ingress-dns-token-cqwxh". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 15 14:13:33 ingress-addon-legacy-062316 kubelet[1641]: I0115 14:13:33.121002    1641 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-cqwxh" (UniqueName: "kubernetes.io/secret/e478593e-53df-4d86-bb73-45d50902217f-minikube-ingress-dns-token-cqwxh") on node "ingress-addon-legacy-062316" DevicePath ""
	Jan 15 14:13:33 ingress-addon-legacy-062316 kubelet[1641]: I0115 14:13:33.945522    1641 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: c03aa7f4a868c351a37ca1a1a86c01a033905f57885a04ad4bbfaf3d10d98116
	Jan 15 14:13:34 ingress-addon-legacy-062316 kubelet[1641]: I0115 14:13:34.650660    1641 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 96aa5f5dfda53093243f2c25c4faedb63dea8a7eb68d1db31e47d72ffdfdf368
	Jan 15 14:13:34 ingress-addon-legacy-062316 kubelet[1641]: I0115 14:13:34.949211    1641 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 96aa5f5dfda53093243f2c25c4faedb63dea8a7eb68d1db31e47d72ffdfdf368
	Jan 15 14:13:34 ingress-addon-legacy-062316 kubelet[1641]: I0115 14:13:34.949562    1641 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: c0fe9bd11c8913936e42d925ebc44e75c37f68ab8b4cc2303c1f67078ac1f545
	Jan 15 14:13:34 ingress-addon-legacy-062316 kubelet[1641]: E0115 14:13:34.949819    1641 pod_workers.go:191] Error syncing pod aa5517c2-bfee-4127-9653-e77f7dbc0c22 ("hello-world-app-5f5d8b66bb-7vblk_default(aa5517c2-bfee-4127-9653-e77f7dbc0c22)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-7vblk_default(aa5517c2-bfee-4127-9653-e77f7dbc0c22)"
	Jan 15 14:13:36 ingress-addon-legacy-062316 kubelet[1641]: E0115 14:13:36.581880    1641 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-qxd4p.17aa8acbbd8ca73f", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-qxd4p", UID:"d770750a-680c-47df-8f8b-236661d5959c", APIVersion:"v1", ResourceVersion:"486", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-062316"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1616d442259073f, ext:84511713736, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1616d442259073f, ext:84511713736, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-qxd4p.17aa8acbbd8ca73f" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 15 14:13:36 ingress-addon-legacy-062316 kubelet[1641]: E0115 14:13:36.593334    1641 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-qxd4p.17aa8acbbd8ca73f", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-qxd4p", UID:"d770750a-680c-47df-8f8b-236661d5959c", APIVersion:"v1", ResourceVersion:"486", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-062316"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1616d442259073f, ext:84511713736, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1616d4422f64382, ext:84522018315, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-qxd4p.17aa8acbbd8ca73f" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 15 14:13:38 ingress-addon-legacy-062316 kubelet[1641]: E0115 14:13:38.792367    1641 remote_runtime.go:128] StopPodSandbox "82af00e583eda2665c27c7be22ac80fba0b2606b6f253ad995cea4fce6453d4b" from runtime service failed: rpc error: code = Unknown desc = failed to destroy network for sandbox "82af00e583eda2665c27c7be22ac80fba0b2606b6f253ad995cea4fce6453d4b": plugin type="portmap" failed (delete): could not teardown ipv4 dnat: running [/usr/sbin/iptables -t nat -F CNI-DN-f2e86ed240a03939556de --wait]: exit status 1: iptables: No chain/target/match by that name.
	Jan 15 14:13:38 ingress-addon-legacy-062316 kubelet[1641]: E0115 14:13:38.792430    1641 kuberuntime_manager.go:912] Failed to stop sandbox {"containerd" "82af00e583eda2665c27c7be22ac80fba0b2606b6f253ad995cea4fce6453d4b"}
	Jan 15 14:13:38 ingress-addon-legacy-062316 kubelet[1641]: E0115 14:13:38.792491    1641 kubelet.go:1598] error killing pod: failed to "KillPodSandbox" for "d770750a-680c-47df-8f8b-236661d5959c" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for sandbox \"82af00e583eda2665c27c7be22ac80fba0b2606b6f253ad995cea4fce6453d4b\": plugin type=\"portmap\" failed (delete): could not teardown ipv4 dnat: running [/usr/sbin/iptables -t nat -F CNI-DN-f2e86ed240a03939556de --wait]: exit status 1: iptables: No chain/target/match by that name.\n"
	Jan 15 14:13:38 ingress-addon-legacy-062316 kubelet[1641]: E0115 14:13:38.792509    1641 pod_workers.go:191] Error syncing pod d770750a-680c-47df-8f8b-236661d5959c ("ingress-nginx-controller-7fcf777cb7-qxd4p_ingress-nginx(d770750a-680c-47df-8f8b-236661d5959c)"), skipping: error killing pod: failed to "KillPodSandbox" for "d770750a-680c-47df-8f8b-236661d5959c" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for sandbox \"82af00e583eda2665c27c7be22ac80fba0b2606b6f253ad995cea4fce6453d4b\": plugin type=\"portmap\" failed (delete): could not teardown ipv4 dnat: running [/usr/sbin/iptables -t nat -F CNI-DN-f2e86ed240a03939556de --wait]: exit status 1: iptables: No chain/target/match by that name.\n"
	Jan 15 14:13:38 ingress-addon-legacy-062316 kubelet[1641]: E0115 14:13:38.806590    1641 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-qxd4p.17aa8acc41a56bde", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-qxd4p", UID:"d770750a-680c-47df-8f8b-236661d5959c", APIVersion:"v1", ResourceVersion:"486", FieldPath:""}, Reason:"FailedKillPod", Message:"error killing pod: failed to \"KillPodSandbox\" for \"d770750a-680c-47df-8f8b-236661d5959c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82af00e583eda2665c27c7be22ac80fba0b2606b6f253ad995cea4fce6453d4b\\\": plugin type=\\\"portmap\\\" failed (delete): could not teardown ipv4 dnat: running [/usr/sbin/iptables -t nat -F CNI-DN-f2e86ed240a03939556de --wait]: exit status 1: iptables: No chain/target/match by that name.\\n\"", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-062316"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1616d44af3c37de, ext:86727929447, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1616d44af3c37de, ext:86727929447, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-qxd4p.17aa8acc41a56bde" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 15 14:13:38 ingress-addon-legacy-062316 kubelet[1641]: W0115 14:13:38.969085    1641 pod_container_deletor.go:77] Container "82af00e583eda2665c27c7be22ac80fba0b2606b6f253ad995cea4fce6453d4b" not found in pod's containers
	Jan 15 14:13:40 ingress-addon-legacy-062316 kubelet[1641]: I0115 14:13:40.742698    1641 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-tvltz" (UniqueName: "kubernetes.io/secret/d770750a-680c-47df-8f8b-236661d5959c-ingress-nginx-token-tvltz") pod "d770750a-680c-47df-8f8b-236661d5959c" (UID: "d770750a-680c-47df-8f8b-236661d5959c")
	Jan 15 14:13:40 ingress-addon-legacy-062316 kubelet[1641]: I0115 14:13:40.742747    1641 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d770750a-680c-47df-8f8b-236661d5959c-webhook-cert") pod "d770750a-680c-47df-8f8b-236661d5959c" (UID: "d770750a-680c-47df-8f8b-236661d5959c")
	Jan 15 14:13:40 ingress-addon-legacy-062316 kubelet[1641]: I0115 14:13:40.747987    1641 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d770750a-680c-47df-8f8b-236661d5959c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "d770750a-680c-47df-8f8b-236661d5959c" (UID: "d770750a-680c-47df-8f8b-236661d5959c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 15 14:13:40 ingress-addon-legacy-062316 kubelet[1641]: I0115 14:13:40.749946    1641 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d770750a-680c-47df-8f8b-236661d5959c-ingress-nginx-token-tvltz" (OuterVolumeSpecName: "ingress-nginx-token-tvltz") pod "d770750a-680c-47df-8f8b-236661d5959c" (UID: "d770750a-680c-47df-8f8b-236661d5959c"). InnerVolumeSpecName "ingress-nginx-token-tvltz". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 15 14:13:40 ingress-addon-legacy-062316 kubelet[1641]: I0115 14:13:40.843067    1641 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d770750a-680c-47df-8f8b-236661d5959c-webhook-cert") on node "ingress-addon-legacy-062316" DevicePath ""
	Jan 15 14:13:40 ingress-addon-legacy-062316 kubelet[1641]: I0115 14:13:40.843118    1641 reconciler.go:319] Volume detached for volume "ingress-nginx-token-tvltz" (UniqueName: "kubernetes.io/secret/d770750a-680c-47df-8f8b-236661d5959c-ingress-nginx-token-tvltz") on node "ingress-addon-legacy-062316" DevicePath ""
	Jan 15 14:13:41 ingress-addon-legacy-062316 kubelet[1641]: W0115 14:13:41.657164    1641 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/d770750a-680c-47df-8f8b-236661d5959c/volumes" does not exist
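	
	Note: two distinct problems are interleaved above: hello-world-app restarting under a 20s CrashLoopBackOff back-off, and the ingress-nginx controller's sandbox teardown hitting the same portmap/iptables error seen in the containerd log (the rejected events are a side effect of the ingress-nginx namespace being terminated, not a separate fault). A sketch for retrieving the crashing container's prior logs, using only standard kubectl and the pod name from this report:
	
	  kubectl --context ingress-addon-legacy-062316 -n default logs hello-world-app-5f5d8b66bb-7vblk --previous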
	
	
	==> storage-provisioner [889ba7a26c7bf0ba014cc8320b96744a2b094857116612e9e929a4d788adf4c4] <==
	I0115 14:12:31.869209       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0115 14:12:31.880913       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0115 14:12:31.881062       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0115 14:12:31.888242       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0115 14:12:31.888768       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a0529d0b-84c5-46eb-a79b-0e2588fc2be7", APIVersion:"v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-062316_bb3e1222-a8a1-40d5-b7db-262dd1ad83d8 became leader
	I0115 14:12:31.888918       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-062316_bb3e1222-a8a1-40d5-b7db-262dd1ad83d8!
	I0115 14:12:31.989305       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-062316_bb3e1222-a8a1-40d5-b7db-262dd1ad83d8!
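	
	Note: these lines show client-go leader election: the provisioner acquires the kube-system/k8s.io-minikube-hostpath lock (recorded on an Endpoints object in this legacy cluster) before starting its controller loop. A sketch for inspecting the current holder, assuming the standard leader-election annotation:
	
	  kubectl --context ingress-addon-legacy-062316 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	  # the control-plane.alpha.kubernetes.io/leader annotation names the holder identity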
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-062316 -n ingress-addon-legacy-062316
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-062316 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (49.62s)


Test pass (281/320)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 14.17
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
9 TestDownloadOnly/v1.16.0/DeleteAll 0.23
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.28.4/json-events 13.1
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.1
18 TestDownloadOnly/v1.28.4/DeleteAll 0.24
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.29.0-rc.2/json-events 12.28
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.23
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.4
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.26
30 TestBinaryMirror 0.62
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.1
36 TestAddons/Setup 142.61
38 TestAddons/parallel/Registry 15.6
40 TestAddons/parallel/InspektorGadget 10.88
41 TestAddons/parallel/MetricsServer 6.91
44 TestAddons/parallel/CSI 62.09
45 TestAddons/parallel/Headlamp 10.53
47 TestAddons/parallel/LocalPath 53.86
48 TestAddons/parallel/NvidiaDevicePlugin 5.56
49 TestAddons/parallel/Yakd 6
52 TestAddons/serial/GCPAuth/Namespaces 0.18
53 TestAddons/StoppedEnableDisable 12.35
54 TestCertOptions 37.14
55 TestCertExpiration 234.62
57 TestForceSystemdFlag 39.52
58 TestForceSystemdEnv 44.31
59 TestDockerEnvContainerd 45.52
64 TestErrorSpam/setup 35.61
65 TestErrorSpam/start 0.86
66 TestErrorSpam/status 1.11
67 TestErrorSpam/pause 1.86
68 TestErrorSpam/unpause 1.95
69 TestErrorSpam/stop 1.5
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 55.89
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 6.26
76 TestFunctional/serial/KubeContext 0.06
77 TestFunctional/serial/KubectlGetPods 0.09
80 TestFunctional/serial/CacheCmd/cache/add_remote 4.03
81 TestFunctional/serial/CacheCmd/cache/add_local 1.47
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.07
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.35
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.22
86 TestFunctional/serial/CacheCmd/cache/delete 0.15
87 TestFunctional/serial/MinikubeKubectlCmd 0.15
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.17
89 TestFunctional/serial/ExtraConfig 45.57
90 TestFunctional/serial/ComponentHealth 0.11
91 TestFunctional/serial/LogsCmd 1.81
92 TestFunctional/serial/LogsFileCmd 1.84
93 TestFunctional/serial/InvalidService 4.77
95 TestFunctional/parallel/ConfigCmd 0.61
96 TestFunctional/parallel/DashboardCmd 8.84
97 TestFunctional/parallel/DryRun 0.52
98 TestFunctional/parallel/InternationalLanguage 0.26
99 TestFunctional/parallel/StatusCmd 1.17
103 TestFunctional/parallel/ServiceCmdConnect 10.73
104 TestFunctional/parallel/AddonsCmd 0.2
105 TestFunctional/parallel/PersistentVolumeClaim 23.78
107 TestFunctional/parallel/SSHCmd 0.79
108 TestFunctional/parallel/CpCmd 2.59
110 TestFunctional/parallel/FileSync 0.34
111 TestFunctional/parallel/CertSync 2.43
115 TestFunctional/parallel/NodeLabels 0.14
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.75
119 TestFunctional/parallel/License 0.38
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.73
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.54
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
131 TestFunctional/parallel/ServiceCmd/DeployApp 7.22
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
133 TestFunctional/parallel/ProfileCmd/profile_list 0.42
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
135 TestFunctional/parallel/MountCmd/any-port 7.87
136 TestFunctional/parallel/ServiceCmd/List 0.68
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.68
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.46
139 TestFunctional/parallel/ServiceCmd/Format 0.41
140 TestFunctional/parallel/ServiceCmd/URL 0.54
141 TestFunctional/parallel/MountCmd/specific-port 2.46
142 TestFunctional/parallel/MountCmd/VerifyCleanup 3.03
143 TestFunctional/parallel/Version/short 0.11
144 TestFunctional/parallel/Version/components 1.36
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.35
149 TestFunctional/parallel/ImageCommands/ImageBuild 2.69
150 TestFunctional/parallel/ImageCommands/Setup 1.72
151 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
152 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
153 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
158 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.64
161 TestFunctional/delete_addon-resizer_images 0.09
162 TestFunctional/delete_my-image_image 0.02
163 TestFunctional/delete_minikube_cached_images 0.02
167 TestIngressAddonLegacy/StartLegacyK8sCluster 90.29
169 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 8.43
170 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.66
174 TestJSONOutput/start/Command 62.33
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/pause/Command 0.83
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/unpause/Command 0.75
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 5.87
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.27
199 TestKicCustomNetwork/create_custom_network 43.93
200 TestKicCustomNetwork/use_default_bridge_network 35.75
201 TestKicExistingNetwork 35.06
202 TestKicCustomSubnet 36.97
203 TestKicStaticIP 35.12
204 TestMainNoArgs 0.07
205 TestMinikubeProfile 69.75
208 TestMountStart/serial/StartWithMountFirst 9.38
209 TestMountStart/serial/VerifyMountFirst 0.29
210 TestMountStart/serial/StartWithMountSecond 6.5
211 TestMountStart/serial/VerifyMountSecond 0.29
212 TestMountStart/serial/DeleteFirst 1.66
213 TestMountStart/serial/VerifyMountPostDelete 0.29
214 TestMountStart/serial/Stop 1.23
215 TestMountStart/serial/RestartStopped 7.63
216 TestMountStart/serial/VerifyMountPostStop 0.29
219 TestMultiNode/serial/FreshStart2Nodes 77.99
220 TestMultiNode/serial/DeployApp2Nodes 5.8
221 TestMultiNode/serial/PingHostFrom2Pods 1.08
222 TestMultiNode/serial/AddNode 17.94
223 TestMultiNode/serial/MultiNodeLabels 0.09
224 TestMultiNode/serial/ProfileList 0.35
225 TestMultiNode/serial/CopyFile 11.23
226 TestMultiNode/serial/StopNode 2.44
227 TestMultiNode/serial/StartAfterStop 12.3
228 TestMultiNode/serial/RestartKeepsNodes 117.83
229 TestMultiNode/serial/DeleteNode 5.2
230 TestMultiNode/serial/StopMultiNode 24.11
231 TestMultiNode/serial/RestartMultiNode 78.69
232 TestMultiNode/serial/ValidateNameConflict 36.6
237 TestPreload 156.71
239 TestScheduledStopUnix 104.44
242 TestInsufficientStorage 10.81
243 TestRunningBinaryUpgrade 81.68
245 TestKubernetesUpgrade 376.22
246 TestMissingContainerUpgrade 164.13
248 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
249 TestNoKubernetes/serial/StartWithK8s 38.13
250 TestNoKubernetes/serial/StartWithStopK8s 19.36
251 TestNoKubernetes/serial/Start 6.36
252 TestNoKubernetes/serial/VerifyK8sNotRunning 0.39
253 TestNoKubernetes/serial/ProfileList 1.12
254 TestNoKubernetes/serial/Stop 1.3
255 TestNoKubernetes/serial/StartNoArgs 8.32
256 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.54
257 TestStoppedBinaryUpgrade/Setup 1.17
258 TestStoppedBinaryUpgrade/Upgrade 109.99
259 TestStoppedBinaryUpgrade/MinikubeLogs 1.12
268 TestPause/serial/Start 58.78
269 TestPause/serial/SecondStartNoReconfiguration 7.37
270 TestPause/serial/Pause 1.07
271 TestPause/serial/VerifyStatus 0.48
272 TestPause/serial/Unpause 0.92
273 TestPause/serial/PauseAgain 1.1
274 TestPause/serial/DeletePaused 3.27
275 TestPause/serial/VerifyDeletedResources 12.88
283 TestNetworkPlugins/group/false 5.26
288 TestStartStop/group/old-k8s-version/serial/FirstStart 128.47
289 TestStartStop/group/old-k8s-version/serial/DeployApp 9.43
290 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.13
291 TestStartStop/group/old-k8s-version/serial/Stop 12.14
292 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
293 TestStartStop/group/old-k8s-version/serial/SecondStart 647.3
295 TestStartStop/group/no-preload/serial/FirstStart 68.1
296 TestStartStop/group/no-preload/serial/DeployApp 8.35
297 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.15
298 TestStartStop/group/no-preload/serial/Stop 12.14
299 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
300 TestStartStop/group/no-preload/serial/SecondStart 337.8
301 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.01
302 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
303 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
304 TestStartStop/group/no-preload/serial/Pause 3.33
306 TestStartStop/group/embed-certs/serial/FirstStart 58.27
307 TestStartStop/group/embed-certs/serial/DeployApp 8.33
308 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.2
309 TestStartStop/group/embed-certs/serial/Stop 12.11
310 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
311 TestStartStop/group/embed-certs/serial/SecondStart 340.49
312 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
313 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
314 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
315 TestStartStop/group/old-k8s-version/serial/Pause 3.5
317 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 58.14
318 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.37
319 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.21
320 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.16
321 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
322 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 347.25
323 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10
324 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
325 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
326 TestStartStop/group/embed-certs/serial/Pause 3.44
328 TestStartStop/group/newest-cni/serial/FirstStart 47.42
329 TestStartStop/group/newest-cni/serial/DeployApp 0
330 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.18
331 TestStartStop/group/newest-cni/serial/Stop 1.3
332 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
333 TestStartStop/group/newest-cni/serial/SecondStart 31.4
334 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
337 TestStartStop/group/newest-cni/serial/Pause 3.39
338 TestNetworkPlugins/group/auto/Start 49.7
339 TestNetworkPlugins/group/auto/KubeletFlags 0.33
340 TestNetworkPlugins/group/auto/NetCatPod 9.28
341 TestNetworkPlugins/group/auto/DNS 0.29
342 TestNetworkPlugins/group/auto/Localhost 0.22
343 TestNetworkPlugins/group/auto/HairPin 0.22
344 TestNetworkPlugins/group/kindnet/Start 63.96
345 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 15.01
346 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.11
347 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.35
348 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.4
349 TestNetworkPlugins/group/calico/Start 74.05
350 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
351 TestNetworkPlugins/group/kindnet/KubeletFlags 0.47
352 TestNetworkPlugins/group/kindnet/NetCatPod 10.35
353 TestNetworkPlugins/group/kindnet/DNS 0.2
354 TestNetworkPlugins/group/kindnet/Localhost 0.21
355 TestNetworkPlugins/group/kindnet/HairPin 0.21
356 TestNetworkPlugins/group/custom-flannel/Start 62.27
357 TestNetworkPlugins/group/calico/ControllerPod 6.01
358 TestNetworkPlugins/group/calico/KubeletFlags 0.39
359 TestNetworkPlugins/group/calico/NetCatPod 12.32
360 TestNetworkPlugins/group/calico/DNS 0.33
361 TestNetworkPlugins/group/calico/Localhost 0.22
362 TestNetworkPlugins/group/calico/HairPin 0.29
363 TestNetworkPlugins/group/enable-default-cni/Start 88.06
364 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.38
365 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.31
366 TestNetworkPlugins/group/custom-flannel/DNS 0.28
367 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
368 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
369 TestNetworkPlugins/group/flannel/Start 57.61
370 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
371 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.29
372 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
373 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
374 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
375 TestNetworkPlugins/group/flannel/ControllerPod 6.01
376 TestNetworkPlugins/group/flannel/KubeletFlags 0.45
377 TestNetworkPlugins/group/flannel/NetCatPod 10.42
378 TestNetworkPlugins/group/bridge/Start 86.07
379 TestNetworkPlugins/group/flannel/DNS 0.27
380 TestNetworkPlugins/group/flannel/Localhost 0.23
381 TestNetworkPlugins/group/flannel/HairPin 0.22
382 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
383 TestNetworkPlugins/group/bridge/NetCatPod 9.28
384 TestNetworkPlugins/group/bridge/DNS 0.18
385 TestNetworkPlugins/group/bridge/Localhost 0.16
386 TestNetworkPlugins/group/bridge/HairPin 0.16

TestDownloadOnly/v1.16.0/json-events (14.17s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-450455 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-450455 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (14.165345912s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (14.17s)
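For local reproduction, a minimal sketch (assuming the out/minikube-linux-arm64 binary from this report is built) that runs the same download-only start and decodes the JSON event stream that -o=json emits, one JSON object at a time:

package main

import (
	"encoding/json"
	"log"
	"os/exec"
)

func main() {
	// Same flags as the test invocation above.
	cmd := exec.Command("out/minikube-linux-arm64", "start",
		"-o=json", "--download-only", "-p", "download-only-450455",
		"--force", "--alsologtostderr", "--kubernetes-version=v1.16.0",
		"--container-runtime=containerd", "--driver=docker")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	// -o=json emits a stream of JSON objects; decode them generically
	// rather than assuming a particular event schema.
	dec := json.NewDecoder(stdout)
	for dec.More() {
		var ev map[string]interface{}
		if err := dec.Decode(&ev); err != nil {
			log.Fatal(err)
		}
		log.Printf("event: %v", ev)
	}
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}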

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
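The preload-exists check boils down to stat'ing the cached tarball. A small illustrative sketch, assuming the default ~/.minikube home (this job instead sets MINIKUBE_HOME inside the jenkins workspace, as the Last Start log below shows); the file name is copied from the download log below:

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	home, err := os.UserHomeDir()
	if err != nil {
		log.Fatal(err)
	}
	// Cache layout mirrors the path in the download log below.
	p := filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4")
	if fi, err := os.Stat(p); err != nil {
		fmt.Println("preload missing:", err)
	} else {
		fmt.Printf("preload exists: %s (%d bytes)\n", p, fi.Size())
	}
}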

TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-450455
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-450455: exit status 85 (87.079358ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-450455 | jenkins | v1.32.0 | 15 Jan 24 14:00 UTC |          |
	|         | -p download-only-450455        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 14:00:45
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 14:00:45.060855 4001374 out.go:296] Setting OutFile to fd 1 ...
	I0115 14:00:45.061086 4001374 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 14:00:45.061113 4001374 out.go:309] Setting ErrFile to fd 2...
	I0115 14:00:45.061136 4001374 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 14:00:45.061443 4001374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-3996034/.minikube/bin
	W0115 14:00:45.061646 4001374 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17957-3996034/.minikube/config/config.json: open /home/jenkins/minikube-integration/17957-3996034/.minikube/config/config.json: no such file or directory
	I0115 14:00:45.062209 4001374 out.go:303] Setting JSON to true
	I0115 14:00:45.063337 4001374 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":67388,"bootTime":1705259857,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0115 14:00:45.063447 4001374 start.go:138] virtualization:  
	I0115 14:00:45.066812 4001374 out.go:97] [download-only-450455] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0115 14:00:45.068873 4001374 out.go:169] MINIKUBE_LOCATION=17957
	W0115 14:00:45.067189 4001374 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17957-3996034/.minikube/cache/preloaded-tarball: no such file or directory
	I0115 14:00:45.067291 4001374 notify.go:220] Checking for updates...
	I0115 14:00:45.073963 4001374 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 14:00:45.076114 4001374 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17957-3996034/kubeconfig
	I0115 14:00:45.078157 4001374 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-3996034/.minikube
	I0115 14:00:45.080058 4001374 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0115 14:00:45.083797 4001374 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0115 14:00:45.084171 4001374 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 14:00:45.112000 4001374 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 14:00:45.112124 4001374 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 14:00:45.208049 4001374 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-01-15 14:00:45.196995092 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 14:00:45.208162 4001374 docker.go:295] overlay module found
	I0115 14:00:45.211903 4001374 out.go:97] Using the docker driver based on user configuration
	I0115 14:00:45.211931 4001374 start.go:298] selected driver: docker
	I0115 14:00:45.211950 4001374 start.go:902] validating driver "docker" against <nil>
	I0115 14:00:45.212072 4001374 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 14:00:45.278993 4001374 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-01-15 14:00:45.269348899 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 14:00:45.279153 4001374 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 14:00:45.279488 4001374 start_flags.go:392] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0115 14:00:45.279674 4001374 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0115 14:00:45.282035 4001374 out.go:169] Using Docker driver with root privileges
	I0115 14:00:45.284207 4001374 cni.go:84] Creating CNI manager for ""
	I0115 14:00:45.284233 4001374 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0115 14:00:45.284254 4001374 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0115 14:00:45.284341 4001374 start_flags.go:321] config:
	{Name:download-only-450455 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-450455 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 14:00:45.286504 4001374 out.go:97] Starting control plane node download-only-450455 in cluster download-only-450455
	I0115 14:00:45.286528 4001374 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0115 14:00:45.288151 4001374 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0115 14:00:45.288185 4001374 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0115 14:00:45.288279 4001374 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0115 14:00:45.306013 4001374 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0115 14:00:45.306685 4001374 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0115 14:00:45.306801 4001374 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0115 14:00:45.388891 4001374 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I0115 14:00:45.388922 4001374 cache.go:56] Caching tarball of preloaded images
	I0115 14:00:45.389078 4001374 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0115 14:00:45.391422 4001374 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0115 14:00:45.391441 4001374 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I0115 14:00:45.499691 4001374 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:1f1e2324dbd6e4f3d8734226d9194e9f -> /home/jenkins/minikube-integration/17957-3996034/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I0115 14:00:50.232440 4001374 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-450455"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
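"minikube logs" exits with status 85 here because the download-only profile never created a control plane (see the stdout above), and the test treats that exit code as expected. A hedged sketch of how a caller can tell that case apart from a real failure (binary path and profile name taken from the run above):

package main

import (
	"errors"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"logs", "-p", "download-only-450455").CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		log.Printf("logs:\n%s", out)
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 85:
		// The expected case for a download-only profile: the command
		// fails, but the test still passes.
		log.Printf("expected exit status 85:\n%s", out)
	default:
		log.Fatalf("unexpected failure: %v\n%s", err, out)
	}
}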

TestDownloadOnly/v1.16.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.23s)

TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-450455
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.16s)

TestDownloadOnly/v1.28.4/json-events (13.1s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-851187 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-851187 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (13.102909685s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (13.10s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-851187
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-851187: exit status 85 (96.035992ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-450455 | jenkins | v1.32.0 | 15 Jan 24 14:00 UTC |                     |
	|         | -p download-only-450455        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 15 Jan 24 14:00 UTC | 15 Jan 24 14:00 UTC |
	| delete  | -p download-only-450455        | download-only-450455 | jenkins | v1.32.0 | 15 Jan 24 14:00 UTC | 15 Jan 24 14:00 UTC |
	| start   | -o=json --download-only        | download-only-851187 | jenkins | v1.32.0 | 15 Jan 24 14:00 UTC |                     |
	|         | -p download-only-851187        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 14:00:59
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 14:00:59.696050 4001539 out.go:296] Setting OutFile to fd 1 ...
	I0115 14:00:59.696447 4001539 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 14:00:59.696458 4001539 out.go:309] Setting ErrFile to fd 2...
	I0115 14:00:59.696465 4001539 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 14:00:59.696720 4001539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-3996034/.minikube/bin
	I0115 14:00:59.697144 4001539 out.go:303] Setting JSON to true
	I0115 14:00:59.697980 4001539 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":67403,"bootTime":1705259857,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0115 14:00:59.698054 4001539 start.go:138] virtualization:  
	I0115 14:00:59.700562 4001539 out.go:97] [download-only-851187] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0115 14:00:59.702658 4001539 out.go:169] MINIKUBE_LOCATION=17957
	I0115 14:00:59.700923 4001539 notify.go:220] Checking for updates...
	I0115 14:00:59.706618 4001539 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 14:00:59.708389 4001539 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17957-3996034/kubeconfig
	I0115 14:00:59.710076 4001539 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-3996034/.minikube
	I0115 14:00:59.712327 4001539 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0115 14:00:59.716197 4001539 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0115 14:00:59.716515 4001539 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 14:00:59.740643 4001539 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 14:00:59.740754 4001539 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 14:00:59.830580 4001539 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-15 14:00:59.820350752 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 14:00:59.830697 4001539 docker.go:295] overlay module found
	I0115 14:00:59.832561 4001539 out.go:97] Using the docker driver based on user configuration
	I0115 14:00:59.832585 4001539 start.go:298] selected driver: docker
	I0115 14:00:59.832592 4001539 start.go:902] validating driver "docker" against <nil>
	I0115 14:00:59.832693 4001539 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 14:00:59.898722 4001539 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-15 14:00:59.888948606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 14:00:59.898894 4001539 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 14:00:59.899176 4001539 start_flags.go:392] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0115 14:00:59.899379 4001539 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0115 14:00:59.901501 4001539 out.go:169] Using Docker driver with root privileges
	I0115 14:00:59.903418 4001539 cni.go:84] Creating CNI manager for ""
	I0115 14:00:59.903446 4001539 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0115 14:00:59.903459 4001539 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0115 14:00:59.903475 4001539 start_flags.go:321] config:
	{Name:download-only-851187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-851187 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 14:00:59.905729 4001539 out.go:97] Starting control plane node download-only-851187 in cluster download-only-851187
	I0115 14:00:59.905749 4001539 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0115 14:00:59.907615 4001539 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0115 14:00:59.907639 4001539 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0115 14:00:59.907803 4001539 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0115 14:00:59.924128 4001539 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0115 14:00:59.924264 4001539 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0115 14:00:59.924289 4001539 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0115 14:00:59.924297 4001539 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0115 14:00:59.924305 4001539 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0115 14:00:59.972680 4001539 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0115 14:00:59.972707 4001539 cache.go:56] Caching tarball of preloaded images
	I0115 14:00:59.972887 4001539 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0115 14:00:59.975194 4001539 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0115 14:00:59.975215 4001539 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 ...
	I0115 14:01:00.087164 4001539 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4?checksum=md5:cc2d75db20c4d651f0460755d6df7b03 -> /home/jenkins/minikube-integration/17957-3996034/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-851187"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.10s)
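The download line in the log above requests the preload with a checksum hint (?checksum=md5:cc2d75db20c4d651f0460755d6df7b03), against which the saved tarball is verified. A self-contained sketch of that verification step (file name and digest copied from the v1.28.4 log; not minikube's actual downloader code):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

func main() {
	// Digest taken from the ?checksum=md5:... query in the log above.
	const want = "cc2d75db20c4d651f0460755d6df7b03"

	f, err := os.Open("preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Stream the tarball through the hash rather than reading it into memory.
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		log.Fatal(err)
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != want {
		log.Fatalf("checksum mismatch: got %s, want %s", got, want)
	}
	fmt.Println("preload checksum verified")
}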

TestDownloadOnly/v1.28.4/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.24s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-851187
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.29.0-rc.2/json-events (12.28s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-168263 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-168263 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (12.28149858s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (12.28s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.23s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-168263
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-168263: exit status 85 (230.427406ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-450455 | jenkins | v1.32.0 | 15 Jan 24 14:00 UTC |                     |
	|         | -p download-only-450455           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 15 Jan 24 14:00 UTC | 15 Jan 24 14:00 UTC |
	| delete  | -p download-only-450455           | download-only-450455 | jenkins | v1.32.0 | 15 Jan 24 14:00 UTC | 15 Jan 24 14:00 UTC |
	| start   | -o=json --download-only           | download-only-851187 | jenkins | v1.32.0 | 15 Jan 24 14:00 UTC |                     |
	|         | -p download-only-851187           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 15 Jan 24 14:01 UTC | 15 Jan 24 14:01 UTC |
	| delete  | -p download-only-851187           | download-only-851187 | jenkins | v1.32.0 | 15 Jan 24 14:01 UTC | 15 Jan 24 14:01 UTC |
	| start   | -o=json --download-only           | download-only-168263 | jenkins | v1.32.0 | 15 Jan 24 14:01 UTC |                     |
	|         | -p download-only-168263           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/15 14:01:13
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0115 14:01:13.278179 4001700 out.go:296] Setting OutFile to fd 1 ...
	I0115 14:01:13.278398 4001700 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 14:01:13.278428 4001700 out.go:309] Setting ErrFile to fd 2...
	I0115 14:01:13.278448 4001700 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 14:01:13.278727 4001700 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-3996034/.minikube/bin
	I0115 14:01:13.279184 4001700 out.go:303] Setting JSON to true
	I0115 14:01:13.280057 4001700 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":67417,"bootTime":1705259857,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0115 14:01:13.280166 4001700 start.go:138] virtualization:  
	I0115 14:01:13.283074 4001700 out.go:97] [download-only-168263] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0115 14:01:13.285311 4001700 out.go:169] MINIKUBE_LOCATION=17957
	I0115 14:01:13.283424 4001700 notify.go:220] Checking for updates...
	I0115 14:01:13.289251 4001700 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 14:01:13.291353 4001700 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17957-3996034/kubeconfig
	I0115 14:01:13.293211 4001700 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-3996034/.minikube
	I0115 14:01:13.295096 4001700 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0115 14:01:13.298638 4001700 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0115 14:01:13.298973 4001700 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 14:01:13.326303 4001700 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 14:01:13.326428 4001700 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 14:01:13.410171 4001700 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-15 14:01:13.399839155 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 14:01:13.410288 4001700 docker.go:295] overlay module found
	I0115 14:01:13.412399 4001700 out.go:97] Using the docker driver based on user configuration
	I0115 14:01:13.412430 4001700 start.go:298] selected driver: docker
	I0115 14:01:13.412437 4001700 start.go:902] validating driver "docker" against <nil>
	I0115 14:01:13.412535 4001700 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 14:01:13.477596 4001700 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2024-01-15 14:01:13.467613163 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 14:01:13.477756 4001700 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0115 14:01:13.478046 4001700 start_flags.go:392] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0115 14:01:13.478231 4001700 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0115 14:01:13.480540 4001700 out.go:169] Using Docker driver with root privileges
	I0115 14:01:13.482476 4001700 cni.go:84] Creating CNI manager for ""
	I0115 14:01:13.482514 4001700 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0115 14:01:13.482528 4001700 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0115 14:01:13.482542 4001700 start_flags.go:321] config:
	{Name:download-only-168263 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-168263 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 14:01:13.484835 4001700 out.go:97] Starting control plane node download-only-168263 in cluster download-only-168263
	I0115 14:01:13.484857 4001700 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0115 14:01:13.487174 4001700 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0115 14:01:13.487198 4001700 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0115 14:01:13.487314 4001700 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0115 14:01:13.504923 4001700 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0115 14:01:13.505104 4001700 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0115 14:01:13.505124 4001700 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0115 14:01:13.505129 4001700 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0115 14:01:13.505137 4001700 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0115 14:01:13.556591 4001700 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4
	I0115 14:01:13.556636 4001700 cache.go:56] Caching tarball of preloaded images
	I0115 14:01:13.557181 4001700 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0115 14:01:13.559438 4001700 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0115 14:01:13.559455 4001700 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4 ...
	I0115 14:01:13.666607 4001700 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:adc883bf092a67b4673b5b5787f99b2f -> /home/jenkins/minikube-integration/17957-3996034/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-168263"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.23s)
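For reference, the preload fetch recorded in the log above can be reproduced and checked by hand. A minimal sketch, assuming curl and md5sum are available; the URL and md5 are the ones from the download.go line:

    URL=https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4
    curl -fLO "$URL"
    # verify against the checksum minikube passed in the ?checksum= query parameter
    echo "adc883bf092a67b4673b5b5787f99b2f  ${URL##*/}" | md5sum -c -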

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.4s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.40s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.26s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-168263
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.26s)

TestBinaryMirror (0.62s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-093958 --alsologtostderr --binary-mirror http://127.0.0.1:41435 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-093958" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-093958
--- PASS: TestBinaryMirror (0.62s)
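TestBinaryMirror only checks that minikube fetches its Kubernetes binaries from the --binary-mirror endpoint instead of storage.googleapis.com. A sketch of the same flow by hand; the mirror directory layout and the profile name are assumptions, not taken from the log:

    # serve a local tree that mimics the upstream kubernetes-release layout,
    # e.g. release/<version>/bin/linux/arm64/kubectl, then point minikube at it
    python3 -m http.server 41435 --directory /path/to/release-mirror &
    out/minikube-linux-arm64 start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:41435 --driver=docker --container-runtime=containerd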

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-916083
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-916083: exit status 85 (92.052565ms)

-- stdout --
	* Profile "addons-916083" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-916083"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.1s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-916083
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-916083: exit status 85 (100.524897ms)

-- stdout --
	* Profile "addons-916083" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-916083"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.10s)

TestAddons/Setup (142.61s)
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-916083 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-916083 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (2m22.606171182s)
--- PASS: TestAddons/Setup (142.61s)
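After a start like the one above, the enabled addon set can be confirmed with the addons subcommand (profile name from the log):

    out/minikube-linux-arm64 -p addons-916083 addons list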

TestAddons/parallel/Registry (15.6s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 45.579412ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-htcrm" [51ffa260-a633-46c3-8d2c-1a9690503666] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004757576s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-74zd5" [f108cc01-7802-4b5f-8935-c829e0ac2f02] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005125459s
addons_test.go:340: (dbg) Run:  kubectl --context addons-916083 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-916083 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-916083 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.394443291s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-916083 ip
2024/01/15 14:04:05 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-916083 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.60s)
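The two reachability checks the test performs can be rerun by hand: the in-cluster Service probe, and the node-side registry port that the DEBUG line above hits. The pod name is illustrative, and /v2/_catalog assumes the addon serves the standard registry v2 API:

    kubectl --context addons-916083 run --rm -it registry-check \
      --image=gcr.io/k8s-minikube/busybox --restart=Never -- \
      wget --spider -S http://registry.kube-system.svc.cluster.local
    curl -s http://192.168.49.2:5000/v2/_catalog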

TestAddons/parallel/InspektorGadget (10.88s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-dxgjd" [9638f63c-6219-4ff4-afe4-37e8cb6c5fe7] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004804082s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-916083
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-916083: (5.876619975s)
--- PASS: TestAddons/parallel/InspektorGadget (10.88s)

TestAddons/parallel/MetricsServer (6.91s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 5.256235ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-2qp4d" [d0a4b682-7faf-459b-a7d0-8873c8b2db17] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0049851s
addons_test.go:415: (dbg) Run:  kubectl --context addons-916083 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-916083 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.91s)

TestAddons/parallel/CSI (62.09s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 46.304109ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-916083 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-916083 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [4c4f6ac4-8b01-4a8c-a73b-59de4bfa76d8] Pending
helpers_test.go:344: "task-pv-pod" [4c4f6ac4-8b01-4a8c-a73b-59de4bfa76d8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [4c4f6ac4-8b01-4a8c-a73b-59de4bfa76d8] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.004379013s
addons_test.go:584: (dbg) Run:  kubectl --context addons-916083 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-916083 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-916083 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-916083 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-916083 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-916083 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-916083 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [865aff2a-e8d2-4e5d-9f02-01507791cf5a] Pending
helpers_test.go:344: "task-pv-pod-restore" [865aff2a-e8d2-4e5d-9f02-01507791cf5a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [865aff2a-e8d2-4e5d-9f02-01507791cf5a] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003751786s
addons_test.go:626: (dbg) Run:  kubectl --context addons-916083 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-916083 delete pod task-pv-pod-restore: (1.095973794s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-916083 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-916083 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-916083 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-916083 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.825692938s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-916083 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-linux-arm64 -p addons-916083 addons disable volumesnapshots --alsologtostderr -v=1: (1.13300275s)
--- PASS: TestAddons/parallel/CSI (62.09s)
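The provisioning half of this flow, condensed into a sketch. The storageClassName below is an assumption about what testdata/csi-hostpath-driver/pvc.yaml contains; the log does not show the manifest itself:

    kubectl --context addons-916083 create -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc
    spec:
      storageClassName: csi-hostpath-sc
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
    EOF
    # poll until the claim leaves Pending, as the helpers above do
    kubectl --context addons-916083 get pvc hpvc -o jsonpath={.status.phase}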

TestAddons/parallel/Headlamp (10.53s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-916083 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-916083 --alsologtostderr -v=1: (1.523289137s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-776gp" [add08ce1-4504-4f8d-bfd5-664847022a49] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-776gp" [add08ce1-4504-4f8d-bfd5-664847022a49] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.003660839s
--- PASS: TestAddons/parallel/Headlamp (10.53s)

TestAddons/parallel/LocalPath (53.86s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-916083 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-916083 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-916083 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [1d543c57-f8ba-4c5e-ab39-b29e99623a36] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [1d543c57-f8ba-4c5e-ab39-b29e99623a36] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [1d543c57-f8ba-4c5e-ab39-b29e99623a36] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.0042832s
addons_test.go:891: (dbg) Run:  kubectl --context addons-916083 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-916083 ssh "cat /opt/local-path-provisioner/pvc-ebcda5df-1519-4ca3-8350-f3873dc95050_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-916083 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-916083 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-916083 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-916083 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.489934593s)
--- PASS: TestAddons/parallel/LocalPath (53.86s)
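A hand-rolled version of the same claim; the local-path storage class name is the provisioner's usual default and is an assumption here. Once a pod writes to the volume, the data lands under /opt/local-path-provisioner/ on the node, which is what the ssh "cat" step above reads back:

    kubectl --context addons-916083 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc
    spec:
      storageClassName: local-path
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 64Mi
    EOF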

TestAddons/parallel/NvidiaDevicePlugin (5.56s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-dj78p" [10888201-3bd5-457a-aa04-7bc6a2d2dc6a] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003877186s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-916083
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.56s)

TestAddons/parallel/Yakd (6s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-hqlhr" [ebc356b0-62a1-4747-8410-e3c50817c3f9] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003322812s
--- PASS: TestAddons/parallel/Yakd (6.00s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-916083 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-916083 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/StoppedEnableDisable (12.35s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-916083
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-916083: (12.034259111s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-916083
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-916083
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-916083
--- PASS: TestAddons/StoppedEnableDisable (12.35s)

TestCertOptions (37.14s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-378989 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-378989 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (34.416080303s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-378989 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-378989 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-378989 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-378989" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-378989
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-378989: (2.015446544s)
--- PASS: TestCertOptions (37.14s)
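The assertions behind the openssl step can be reproduced directly: dump the certificate's SANs, then confirm the kubeconfig carries the custom --apiserver-port=8555 (both commands mirror what the test runs):

    out/minikube-linux-arm64 -p cert-options-378989 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'
    kubectl config view \
      -o jsonpath='{.clusters[?(@.name=="cert-options-378989")].cluster.server}'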

TestCertExpiration (234.62s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-625577 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-625577 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (39.758481671s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-625577 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-625577 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (9.07795928s)
helpers_test.go:175: Cleaning up "cert-expiration-625577" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-625577
E0115 14:42:55.947104 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-625577: (5.787028946s)
--- PASS: TestCertExpiration (234.62s)
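The renewal behavior in isolation: issue deliberately short-lived certificates, let them age out, then restart with a longer window so they are regenerated. Flags are the ones from the log; the profile name is illustrative:

    out/minikube-linux-arm64 start -p cert-expiration-demo --memory=2048 \
      --cert-expiration=3m --driver=docker --container-runtime=containerd
    sleep 180   # let the 3-minute certificates expire
    out/minikube-linux-arm64 start -p cert-expiration-demo --memory=2048 \
      --cert-expiration=8760h --driver=docker --container-runtime=containerd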

TestForceSystemdFlag (39.52s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-366089 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-366089 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (36.942956742s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-366089 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-366089" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-366089
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-366089: (2.178693929s)
--- PASS: TestForceSystemdFlag (39.52s)
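What the config.toml step is checking: with --force-systemd, containerd's runc runtime should be switched to the systemd cgroup driver. A focused version of the same assertion:

    out/minikube-linux-arm64 -p force-systemd-flag-366089 ssh \
      "grep SystemdCgroup /etc/containerd/config.toml"
    # expected: SystemdCgroup = true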

TestForceSystemdEnv (44.31s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-477206 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0115 14:38:51.056550 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-477206 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (41.644052611s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-477206 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-477206" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-477206
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-477206: (2.275688698s)
--- PASS: TestForceSystemdEnv (44.31s)

TestDockerEnvContainerd (45.52s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-313105 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-313105 --driver=docker  --container-runtime=containerd: (29.523788826s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-313105"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-313105": (1.381067381s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-0Ey7IoWwwjcN/agent.4019362" SSH_AGENT_PID="4019363" DOCKER_HOST=ssh://docker@127.0.0.1:36444 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-0Ey7IoWwwjcN/agent.4019362" SSH_AGENT_PID="4019363" DOCKER_HOST=ssh://docker@127.0.0.1:36444 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-0Ey7IoWwwjcN/agent.4019362" SSH_AGENT_PID="4019363" DOCKER_HOST=ssh://docker@127.0.0.1:36444 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.28182154s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-0Ey7IoWwwjcN/agent.4019362" SSH_AGENT_PID="4019363" DOCKER_HOST=ssh://docker@127.0.0.1:36444 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-313105" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-313105
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-313105: (1.976799024s)
--- PASS: TestDockerEnvContainerd (45.52s)
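The docker-env round trip by hand: eval the printed exports (which set DOCKER_HOST to an ssh:// endpoint and load the node key into an agent), then drive the node's Docker endpoint directly, as the test does:

    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-313105)"
    docker version
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls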

TestErrorSpam/setup (35.61s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-217945 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-217945 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-217945 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-217945 --driver=docker  --container-runtime=containerd: (35.606189772s)
--- PASS: TestErrorSpam/setup (35.61s)

TestErrorSpam/start (0.86s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217945 --log_dir /tmp/nospam-217945 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217945 --log_dir /tmp/nospam-217945 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217945 --log_dir /tmp/nospam-217945 start --dry-run
--- PASS: TestErrorSpam/start (0.86s)

TestErrorSpam/status (1.11s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217945 --log_dir /tmp/nospam-217945 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217945 --log_dir /tmp/nospam-217945 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217945 --log_dir /tmp/nospam-217945 status
--- PASS: TestErrorSpam/status (1.11s)

TestErrorSpam/pause (1.86s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217945 --log_dir /tmp/nospam-217945 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217945 --log_dir /tmp/nospam-217945 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217945 --log_dir /tmp/nospam-217945 pause
--- PASS: TestErrorSpam/pause (1.86s)

TestErrorSpam/unpause (1.95s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217945 --log_dir /tmp/nospam-217945 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217945 --log_dir /tmp/nospam-217945 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217945 --log_dir /tmp/nospam-217945 unpause
--- PASS: TestErrorSpam/unpause (1.95s)

TestErrorSpam/stop (1.5s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217945 --log_dir /tmp/nospam-217945 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-217945 --log_dir /tmp/nospam-217945 stop: (1.266022618s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217945 --log_dir /tmp/nospam-217945 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-217945 --log_dir /tmp/nospam-217945 stop
--- PASS: TestErrorSpam/stop (1.50s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17957-3996034/.minikube/files/etc/test/nested/copy/4001369/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (55.89s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-arm64 start -p functional-672946 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0115 14:08:51.059631 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: no such file or directory
E0115 14:08:51.065322 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: no such file or directory
E0115 14:08:51.075617 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: no such file or directory
E0115 14:08:51.095931 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: no such file or directory
E0115 14:08:51.136252 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: no such file or directory
E0115 14:08:51.216603 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: no such file or directory
E0115 14:08:51.376892 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: no such file or directory
E0115 14:08:51.697350 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: no such file or directory
E0115 14:08:52.338205 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: no such file or directory
E0115 14:08:53.618428 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: no such file or directory
E0115 14:08:56.179351 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: no such file or directory
E0115 14:09:01.300286 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: no such file or directory
functional_test.go:2233: (dbg) Done: out/minikube-linux-arm64 start -p functional-672946 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (55.893008532s)
--- PASS: TestFunctional/serial/StartWithProxy (55.89s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.26s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-672946 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-672946 --alsologtostderr -v=8: (6.25903537s)
functional_test.go:659: soft start took 6.263811301s for "functional-672946" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.26s)

TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-672946 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-672946 cache add registry.k8s.io/pause:3.1: (1.463486924s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-672946 cache add registry.k8s.io/pause:3.3: (1.327221129s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 cache add registry.k8s.io/pause:latest
E0115 14:09:11.541387 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-672946 cache add registry.k8s.io/pause:latest: (1.243954091s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.03s)

TestFunctional/serial/CacheCmd/cache/add_local (1.47s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-672946 /tmp/TestFunctionalserialCacheCmdcacheadd_local1148182728/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 cache add minikube-local-cache-test:functional-672946
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 cache delete minikube-local-cache-test:functional-672946
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-672946
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.47s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-672946 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (334.899118ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-672946 cache reload: (1.187543069s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.22s)
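
The round trip above is exactly what "cache reload" exists for: remove the image inside the node, watch "crictl inspecti" fail, reload, and watch it succeed. A rough Go equivalent, under the same assumptions about binary path and profile:

package main

import (
	"fmt"
	"os/exec"
)

// mk runs the CI-built minikube binary with the given arguments and
// returns any execution error (non-zero exits arrive as *exec.ExitError).
func mk(args ...string) error {
	return exec.Command("out/minikube-linux-arm64", args...).Run()
}

func main() {
	p := "functional-672946"

	// Drop the image inside the node; inspecti should then fail.
	mk("-p", p, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
	if err := mk("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		fmt.Println("expected inspecti to fail after rmi")
	}

	// cache reload pushes every cached image back into the node.
	if err := mk("-p", p, "cache", "reload"); err != nil {
		fmt.Println("cache reload failed:", err)
	}
	if err := mk("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image still missing after reload:", err)
	}
}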

TestFunctional/serial/CacheCmd/cache/delete (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 kubectl -- --context functional-672946 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.17s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-672946 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.17s)

TestFunctional/serial/ExtraConfig (45.57s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-672946 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0115 14:09:32.021641 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-672946 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.571518035s)
functional_test.go:757: restart took 45.571609208s for "functional-672946" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (45.57s)

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-672946 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)
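
The health check above amounts to decoding the control-plane pods as JSON and reading status.phase plus the Ready condition. A self-contained sketch of that decoding; the struct mirrors only the Pod fields the check needs, and the context name is taken from this run:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList captures just enough of the Kubernetes Pod schema for the check.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-672946",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}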

TestFunctional/serial/LogsCmd (1.81s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-672946 logs: (1.807418082s)
--- PASS: TestFunctional/serial/LogsCmd (1.81s)

TestFunctional/serial/LogsFileCmd (1.84s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 logs --file /tmp/TestFunctionalserialLogsFileCmd3384098140/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-672946 logs --file /tmp/TestFunctionalserialLogsFileCmd3384098140/001/logs.txt: (1.838446928s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.84s)

TestFunctional/serial/InvalidService (4.77s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-672946 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-672946
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-672946: exit status 115 (453.734291ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31084 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-672946 delete -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Done: kubectl --context functional-672946 delete -f testdata/invalidsvc.yaml: (1.078714406s)
--- PASS: TestFunctional/serial/InvalidService (4.77s)
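
Note the shape of this failure: the Service object exists, so the NodePort table still renders, but with no running pod behind it the command exits non-zero (115 in this run) with the SVC_UNREACHABLE reason on stderr. A small sketch that treats that exit code as "no endpoints" rather than a hard error; binary path, profile, and the 115 mapping are all taken from this log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "service", "invalid-svc", "-p", "functional-672946")
	out, err := cmd.CombinedOutput()

	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 115 {
		// Exit code observed for SVC_UNREACHABLE in this run: the service
		// exists but no running pod backs it.
		fmt.Println("service unreachable: no running pods behind it")
		return
	}
	if err != nil {
		fmt.Println("service command failed:", err)
		return
	}
	fmt.Printf("%s", out)
}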

TestFunctional/parallel/ConfigCmd (0.61s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-672946 config get cpus: exit status 14 (117.16337ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-672946 config get cpus: exit status 14 (89.659402ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.61s)
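
Both non-zero exits above are the same case: "config get" on a key that was never set (or was just unset) exits 14 with an error on stderr, so a caller can distinguish "unset" from a real failure. A sketch of that distinction, assuming the same binary path and profile:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-672946", "config", "get", "cpus")
	out, err := cmd.Output()

	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("cpus = %s", out)
	case errors.As(err, &ee) && ee.ExitCode() == 14:
		// Exit code observed in this run when the key is absent.
		fmt.Println("cpus is not set")
	default:
		fmt.Println("config get failed:", err)
	}
}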

TestFunctional/parallel/DashboardCmd (8.84s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-672946 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-672946 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 4033293: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.84s)

TestFunctional/parallel/DryRun (0.52s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-672946 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-672946 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (220.354127ms)

-- stdout --
	* [functional-672946] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17957
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17957-3996034/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-3996034/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0115 14:10:44.840030 4032829 out.go:296] Setting OutFile to fd 1 ...
	I0115 14:10:44.840213 4032829 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 14:10:44.840227 4032829 out.go:309] Setting ErrFile to fd 2...
	I0115 14:10:44.840235 4032829 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 14:10:44.840518 4032829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-3996034/.minikube/bin
	I0115 14:10:44.841294 4032829 out.go:303] Setting JSON to false
	I0115 14:10:44.842256 4032829 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":67988,"bootTime":1705259857,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0115 14:10:44.842333 4032829 start.go:138] virtualization:  
	I0115 14:10:44.845835 4032829 out.go:177] * [functional-672946] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0115 14:10:44.847890 4032829 out.go:177]   - MINIKUBE_LOCATION=17957
	I0115 14:10:44.851112 4032829 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 14:10:44.848010 4032829 notify.go:220] Checking for updates...
	I0115 14:10:44.854652 4032829 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17957-3996034/kubeconfig
	I0115 14:10:44.856577 4032829 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-3996034/.minikube
	I0115 14:10:44.858517 4032829 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0115 14:10:44.860373 4032829 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 14:10:44.863020 4032829 config.go:182] Loaded profile config "functional-672946": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 14:10:44.863708 4032829 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 14:10:44.889396 4032829 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 14:10:44.889549 4032829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 14:10:44.969026 4032829 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:46 SystemTime:2024-01-15 14:10:44.958647664 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 14:10:44.969122 4032829 docker.go:295] overlay module found
	I0115 14:10:44.971420 4032829 out.go:177] * Using the docker driver based on existing profile
	I0115 14:10:44.973390 4032829 start.go:298] selected driver: docker
	I0115 14:10:44.973423 4032829 start.go:902] validating driver "docker" against &{Name:functional-672946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-672946 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 14:10:44.973532 4032829 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 14:10:44.975822 4032829 out.go:177] 
	W0115 14:10:44.977612 4032829 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0115 14:10:44.979346 4032829 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-672946 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.52s)
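
A --dry-run start still runs the full validation pass, which is why the 250MB request is rejected against the 1800MB floor (RSRC_INSUFFICIENT_REQ_MEMORY, exit status 23 here) before anything is created. A sketch that probes a candidate memory size this way, with the binary path and profile assumed as above:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// dryRunStart asks minikube to validate a start without creating anything.
func dryRunStart(memory string) error {
	return exec.Command("out/minikube-linux-arm64", "start",
		"-p", "functional-672946", "--dry-run", "--memory", memory,
		"--driver=docker", "--container-runtime=containerd").Run()
}

func main() {
	err := dryRunStart("250MB")
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("dry run accepted the config")
	case errors.As(err, &ee):
		// 23 was the code paired with RSRC_INSUFFICIENT_REQ_MEMORY in this run.
		fmt.Println("dry run rejected the config, exit code:", ee.ExitCode())
	default:
		fmt.Println("could not run minikube:", err)
	}
}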

TestFunctional/parallel/InternationalLanguage (0.26s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-672946 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-672946 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (261.725642ms)

-- stdout --
	* [functional-672946] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17957
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17957-3996034/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-3996034/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0115 14:10:44.573733 4032735 out.go:296] Setting OutFile to fd 1 ...
	I0115 14:10:44.573979 4032735 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 14:10:44.574007 4032735 out.go:309] Setting ErrFile to fd 2...
	I0115 14:10:44.574027 4032735 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 14:10:44.575066 4032735 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-3996034/.minikube/bin
	I0115 14:10:44.575576 4032735 out.go:303] Setting JSON to false
	I0115 14:10:44.576656 4032735 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":67988,"bootTime":1705259857,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0115 14:10:44.576761 4032735 start.go:138] virtualization:  
	I0115 14:10:44.579274 4032735 out.go:177] * [functional-672946] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I0115 14:10:44.581975 4032735 out.go:177]   - MINIKUBE_LOCATION=17957
	I0115 14:10:44.581953 4032735 notify.go:220] Checking for updates...
	I0115 14:10:44.585150 4032735 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 14:10:44.587123 4032735 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17957-3996034/kubeconfig
	I0115 14:10:44.589011 4032735 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-3996034/.minikube
	I0115 14:10:44.592627 4032735 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0115 14:10:44.594683 4032735 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 14:10:44.597004 4032735 config.go:182] Loaded profile config "functional-672946": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 14:10:44.597805 4032735 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 14:10:44.637725 4032735 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 14:10:44.637859 4032735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 14:10:44.747128 4032735 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:46 SystemTime:2024-01-15 14:10:44.736972753 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 14:10:44.747232 4032735 docker.go:295] overlay module found
	I0115 14:10:44.749835 4032735 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0115 14:10:44.751846 4032735 start.go:298] selected driver: docker
	I0115 14:10:44.751865 4032735 start.go:902] validating driver "docker" against &{Name:functional-672946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-672946 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0115 14:10:44.751984 4032735 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 14:10:44.754287 4032735 out.go:177] 
	W0115 14:10:44.756264 4032735 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0115 14:10:44.758194 4032735 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)

TestFunctional/parallel/StatusCmd (1.17s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.17s)

TestFunctional/parallel/ServiceCmdConnect (10.73s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-672946 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-672946 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-mx9g7" [864a583f-c328-4d5e-afc8-482b022f9d23] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-mx9g7" [864a583f-c328-4d5e-afc8-482b022f9d23] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003818409s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30292
functional_test.go:1674: http://192.168.49.2:30292: success! body:

Hostname: hello-node-connect-7799dfb7c6-mx9g7

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30292
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.73s)
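
End to end, the connectivity check is: create a deployment, expose it as a NodePort service, ask minikube for the URL, and GET it. A compact sketch of the last two steps (deployment and service assumed to exist already, names from this run):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the reachable NodePort URL of the service.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-672946",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))

	// A plain GET returns the echoserver's dump of the request,
	// like the body quoted above.
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %d\n%s", url, resp.StatusCode, body)
}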

TestFunctional/parallel/AddonsCmd (0.2s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

TestFunctional/parallel/PersistentVolumeClaim (23.78s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1d2ac90a-dc50-43ba-8999-1230022418dd] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.0043134s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-672946 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-672946 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-672946 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-672946 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [084e6b8f-32ea-4277-b967-9d9577969711] Pending
helpers_test.go:344: "sp-pod" [084e6b8f-32ea-4277-b967-9d9577969711] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [084e6b8f-32ea-4277-b967-9d9577969711] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.004516454s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-672946 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-672946 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-672946 delete -f testdata/storage-provisioner/pod.yaml: (1.752973155s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-672946 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a06543dc-01d1-4a71-a304-8aef675bf31f] Pending
helpers_test.go:344: "sp-pod" [a06543dc-01d1-4a71-a304-8aef675bf31f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004594104s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-672946 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.78s)
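
What this test actually proves is durability across pod restarts: touch a file on the claim-backed mount, delete the pod, recreate it from the same manifest, and list the file again. The same loop sketched around kubectl; the manifest path is the test's own testdata, and the wait-for-Running step is elided:

package main

import (
	"fmt"
	"os/exec"
)

// kc runs kubectl against the test cluster's context and returns combined output.
func kc(args ...string) ([]byte, error) {
	full := append([]string{"--context", "functional-672946"}, args...)
	return exec.Command("kubectl", full...).CombinedOutput()
}

func main() {
	// Write a marker file onto the PVC-backed mount inside the pod.
	kc("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")

	// Recreate the pod; the claim, and the data on it, must survive.
	kc("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kc("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (A real check would wait for the new pod to reach Running here.)

	out, err := kc("exec", "sp-pod", "--", "ls", "/tmp/mount")
	if err != nil {
		fmt.Println("ls failed:", err)
		return
	}
	fmt.Printf("surviving files: %s", out)
}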

TestFunctional/parallel/SSHCmd (0.79s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.79s)

TestFunctional/parallel/CpCmd (2.59s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh -n functional-672946 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 cp functional-672946:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2763838400/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh -n functional-672946 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh -n functional-672946 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.59s)

TestFunctional/parallel/FileSync (0.34s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/4001369/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh "sudo cat /etc/test/nested/copy/4001369/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

TestFunctional/parallel/CertSync (2.43s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/4001369.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh "sudo cat /etc/ssl/certs/4001369.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/4001369.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh "sudo cat /usr/share/ca-certificates/4001369.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/40013692.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh "sudo cat /etc/ssl/certs/40013692.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/40013692.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh "sudo cat /usr/share/ca-certificates/40013692.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.43s)
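
Each synced cert is expected in three places inside the node, and the .0-suffixed names (51391683.0, 3ec20f2e.0) appear to be OpenSSL subject-hash aliases for the same PEM files. A sketch that re-checks the probed paths over minikube ssh, with binary path and profile assumed as elsewhere:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/4001369.pem",
		"/usr/share/ca-certificates/4001369.pem",
		"/etc/ssl/certs/51391683.0", // hashed alias of the same cert (assumed)
		"/etc/ssl/certs/40013692.pem",
		"/usr/share/ca-certificates/40013692.pem",
		"/etc/ssl/certs/3ec20f2e.0",
	}
	for _, p := range paths {
		// cat exits non-zero over ssh if the file is missing.
		err := exec.Command("out/minikube-linux-arm64", "-p", "functional-672946",
			"ssh", "sudo cat "+p).Run()
		fmt.Printf("%-45s present=%v\n", p, err == nil)
	}
}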

TestFunctional/parallel/NodeLabels (0.14s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-672946 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.14s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.75s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-672946 ssh "sudo systemctl is-active docker": exit status 1 (362.529177ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh "sudo systemctl is-active crio"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-672946 ssh "sudo systemctl is-active crio": exit status 1 (386.897413ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.75s)
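
The status-3/"inactive" pairs above are the expected outcome on a containerd node: systemctl is-active exits 0 only when the unit is active, and minikube ssh propagates the remote command's failure as its own non-zero exit. That makes the stdout text the reliable signal, as in this sketch (binary path and profile assumed as elsewhere):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeActive reports whether a systemd unit inside the node is active.
// The exit status is ignored on purpose: "inactive" already arrives on stdout.
func runtimeActive(unit string) bool {
	out, _ := exec.Command("out/minikube-linux-arm64", "-p", "functional-672946",
		"ssh", "sudo systemctl is-active "+unit).Output()
	return strings.TrimSpace(string(out)) == "active"
}

func main() {
	for _, unit := range []string{"docker", "crio", "containerd"} {
		fmt.Printf("%s active=%v\n", unit, runtimeActive(unit))
	}
}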

TestFunctional/parallel/License (0.38s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.38s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.73s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-672946 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-672946 tunnel --alsologtostderr]
E0115 14:10:12.982382 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: no such file or directory
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-672946 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-672946 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 4030392: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.73s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-672946 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.54s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-672946 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [6dd1bb7a-4a1e-4aac-80f1-b934821a4e89] Pending
helpers_test.go:344: "nginx-svc" [6dd1bb7a-4a1e-4aac-80f1-b934821a4e89] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [6dd1bb7a-4a1e-4aac-80f1-b934821a4e89] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004558035s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.54s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-672946 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)
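
With the tunnel running, the LoadBalancer service eventually receives an ingress IP, which the jsonpath query above reads back. A polling sketch around that same query (context and service names from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const query = "jsonpath={.status.loadBalancer.ingress[0].ip}"

	// Poll until the tunnel assigns an ingress IP to the service.
	for i := 0; i < 30; i++ {
		out, err := exec.Command("kubectl", "--context", "functional-672946",
			"get", "svc", "nginx-svc", "-o", query).Output()
		if ip := strings.TrimSpace(string(out)); err == nil && ip != "" {
			fmt.Println("ingress IP:", ip)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("no ingress IP assigned within timeout")
}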

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.40.31 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-672946 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-672946 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-672946 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-s6g7g" [ddc91753-9137-47c0-b882-5957244a2489] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-s6g7g" [ddc91753-9137-47c0-b882-5957244a2489] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004439299s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "342.26091ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "74.319646ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "338.361992ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "74.227627ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
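
Both the full and --light listings finish in well under a second; the JSON form is the one meant for tooling. A short Go sketch of consuming it (the top-level "valid"/"invalid" keys and the per-profile "Name" field match current minikube releases, but treat the exact schema as an assumption and decode defensively):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type profile struct {
	Name string `json:"Name"`
}

// profileList mirrors the assumed shape of "minikube profile list -o json".
type profileList struct {
	Valid   []profile `json:"valid"`
	Invalid []profile `json:"invalid"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Println("valid profile:", p.Name)
	}
	fmt.Printf("%d invalid profile(s)\n", len(pl.Invalid))
}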

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.87s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-672946 /tmp/TestFunctionalparallelMountCmdany-port3555053207/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1705327839166749611" to /tmp/TestFunctionalparallelMountCmdany-port3555053207/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1705327839166749611" to /tmp/TestFunctionalparallelMountCmdany-port3555053207/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1705327839166749611" to /tmp/TestFunctionalparallelMountCmdany-port3555053207/001/test-1705327839166749611
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-672946 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (446.963186ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 15 14:10 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 15 14:10 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 15 14:10 test-1705327839166749611
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh cat /mount-9p/test-1705327839166749611
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-672946 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d486ef38-24f8-486d-abaa-b0cbc0d0819e] Pending
helpers_test.go:344: "busybox-mount" [d486ef38-24f8-486d-abaa-b0cbc0d0819e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d486ef38-24f8-486d-abaa-b0cbc0d0819e] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d486ef38-24f8-486d-abaa-b0cbc0d0819e] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003915488s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-672946 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-672946 /tmp/TestFunctionalparallelMountCmdany-port3555053207/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.87s)
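
Note the pattern above: the first "findmnt" over ssh exits non-zero because the 9p server is still coming up, and the harness simply retries. The same probe as a standalone Go sketch (the 30s budget and 1s cadence are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		// Same check the test runs: is /mount-9p backed by a 9p filesystem?
		err := exec.Command("out/minikube-linux-arm64", "-p", "functional-672946",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("/mount-9p is a live 9p mount")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("mount never appeared")
}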

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.68s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 service list -o json
functional_test.go:1493: Took "681.213776ms" to run "out/minikube-linux-arm64 -p functional-672946 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.68s)
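
The JSON listing is the machine-readable twin of "service list". A decoding sketch (the Namespace/Name/URLs field names are an assumption based on current minikube output, not confirmed by this log):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// svc mirrors the assumed per-service record in "service list -o json".
type svc struct {
	Namespace string   `json:"Namespace"`
	Name      string   `json:"Name"`
	URLs      []string `json:"URLs"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-672946",
		"service", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var svcs []svc
	if err := json.Unmarshal(out, &svcs); err != nil {
		panic(err)
	}
	for _, s := range svcs {
		fmt.Println(s.Namespace, s.Name, s.URLs)
	}
}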

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30406
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30406
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.54s)
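
Once "service hello-node --url" has resolved the NodePort endpoint (http://192.168.49.2:30406 here), any HTTP client can exercise it; the echoserver image simply echoes the request back. A minimal check:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	resp, err := http.Get("http://192.168.49.2:30406/") // endpoint taken from the log above
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}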

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.46s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-672946 /tmp/TestFunctionalparallelMountCmdspecific-port758449337/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-672946 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (663.844316ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-672946 /tmp/TestFunctionalparallelMountCmdspecific-port758449337/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-672946 ssh "sudo umount -f /mount-9p": exit status 1 (410.974679ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-672946 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-672946 /tmp/TestFunctionalparallelMountCmdspecific-port758449337/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.46s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (3.03s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-672946 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3722417994/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-672946 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3722417994/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-672946 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3722417994/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-672946 ssh "findmnt -T" /mount1: exit status 1 (1.128551818s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-672946 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-672946 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3722417994/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-672946 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3722417994/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-672946 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3722417994/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (3.03s)
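
The cleanup path above relies on "mount --kill=true", which terminates every background mount daemon for the profile in one shot; afterwards the per-mount stop handlers find no parent process, hence the "assuming dead" lines. Reproduced as a sketch:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "mount",
		"-p", "functional-672946", "--kill=true").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("kill failed:", err)
	}
}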

                                                
                                    
TestFunctional/parallel/Version/short (0.11s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

                                                
                                    
TestFunctional/parallel/Version/components (1.36s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 version -o=json --components
functional_test.go:2269: (dbg) Done: out/minikube-linux-arm64 -p functional-672946 version -o=json --components: (1.35728028s)
--- PASS: TestFunctional/parallel/Version/components (1.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-672946 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-672946
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-672946 image ls --format short --alsologtostderr:
I0115 14:11:10.427726 4035198 out.go:296] Setting OutFile to fd 1 ...
I0115 14:11:10.427872 4035198 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 14:11:10.427898 4035198 out.go:309] Setting ErrFile to fd 2...
I0115 14:11:10.427904 4035198 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 14:11:10.428186 4035198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-3996034/.minikube/bin
I0115 14:11:10.428956 4035198 config.go:182] Loaded profile config "functional-672946": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 14:11:10.429150 4035198 config.go:182] Loaded profile config "functional-672946": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 14:11:10.429875 4035198 cli_runner.go:164] Run: docker container inspect functional-672946 --format={{.State.Status}}
I0115 14:11:10.458012 4035198 ssh_runner.go:195] Run: systemctl --version
I0115 14:11:10.458069 4035198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-672946
I0115 14:11:10.477275 4035198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36454 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/functional-672946/id_rsa Username:docker}
I0115 14:11:10.588877 4035198 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-672946 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:97e046 | 14.6MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4            | sha256:9961cb | 30.4MB |
| registry.k8s.io/kube-scheduler              | v1.28.4            | sha256:05c284 | 17.1MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/library/minikube-local-cache-test | functional-672946  | sha256:87d406 | 1.01kB |
| docker.io/library/nginx                     | alpine             | sha256:74077e | 17.6MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:04b4ea | 25.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:9cdd64 | 86.5MB |
| registry.k8s.io/kube-apiserver              | v1.28.4            | sha256:04b4c4 | 31.6MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/library/nginx                     | latest             | sha256:6c7be4 | 67.2MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/kube-proxy                  | v1.28.4            | sha256:3ca3ca | 22MB   |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-672946 image ls --format table --alsologtostderr:
I0115 14:11:10.782683 4035262 out.go:296] Setting OutFile to fd 1 ...
I0115 14:11:10.782918 4035262 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 14:11:10.782930 4035262 out.go:309] Setting ErrFile to fd 2...
I0115 14:11:10.782936 4035262 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 14:11:10.783207 4035262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-3996034/.minikube/bin
I0115 14:11:10.784015 4035262 config.go:182] Loaded profile config "functional-672946": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 14:11:10.784217 4035262 config.go:182] Loaded profile config "functional-672946": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 14:11:10.784720 4035262 cli_runner.go:164] Run: docker container inspect functional-672946 --format={{.State.Status}}
I0115 14:11:10.813672 4035262 ssh_runner.go:195] Run: systemctl --version
I0115 14:11:10.813770 4035262 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-672946
I0115 14:11:10.837815 4035262 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36454 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/functional-672946/id_rsa Username:docker}
I0115 14:11:10.941035 4035262 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-672946 image ls --format json --alsologtostderr:
[{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"14557471"},{"id":"sha256:9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"30360149"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:3d1873
2f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:74077e780ec714353793e0ef5677b55d7396aa1d31e77ec899f54842f7142448","repoDigests":["docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59"],"repoTags":["docker.io/library/nginx:alpine"],"size":"17610338"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:1
8eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"31582354"},{"id":"sha256:05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"17082307"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c
05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"25324029"},{"id":"sha256:87d406ecf4b8fad619944ac2e43a4c6a24fc80ff8686005508fe02093f77fe2c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-672946"],"size":"1007"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"86464836"},{"id":"sha256:3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":["registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1
.28.4"],"size":"22001357"},{"id":"sha256:6c7be49d2a11cfab9a87362ad27d447b45931e43dfa6919a8e1398ec09c1e353","repoDigests":["docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac"],"repoTags":["docker.io/library/nginx:latest"],"size":"67219073"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-672946 image ls --format json --alsologtostderr:
I0115 14:11:10.728327 4035256 out.go:296] Setting OutFile to fd 1 ...
I0115 14:11:10.728527 4035256 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 14:11:10.728556 4035256 out.go:309] Setting ErrFile to fd 2...
I0115 14:11:10.728576 4035256 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 14:11:10.728880 4035256 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-3996034/.minikube/bin
I0115 14:11:10.729644 4035256 config.go:182] Loaded profile config "functional-672946": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 14:11:10.729897 4035256 config.go:182] Loaded profile config "functional-672946": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 14:11:10.731567 4035256 cli_runner.go:164] Run: docker container inspect functional-672946 --format={{.State.Status}}
I0115 14:11:10.771630 4035256 ssh_runner.go:195] Run: systemctl --version
I0115 14:11:10.771683 4035256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-672946
I0115 14:11:10.793617 4035256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36454 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/functional-672946/id_rsa Username:docker}
I0115 14:11:10.892949 4035256 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)
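
The stdout above is a JSON array of image records, and only four fields appear: id, repoDigests, repoTags, and size (bytes, encoded as a decimal string). A Go decoding sketch using exactly those fields:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image matches the records visible in the "image ls --format json" output.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, as a decimal string
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-672946",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var imgs []image
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	for _, im := range imgs {
		fmt.Println(im.Size, im.RepoTags)
	}
}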

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-672946 image ls --format yaml --alsologtostderr:
- id: sha256:3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests:
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "22001357"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "25324029"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "86464836"
- id: sha256:87d406ecf4b8fad619944ac2e43a4c6a24fc80ff8686005508fe02093f77fe2c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-672946
size: "1007"
- id: sha256:6c7be49d2a11cfab9a87362ad27d447b45931e43dfa6919a8e1398ec09c1e353
repoDigests:
- docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac
repoTags:
- docker.io/library/nginx:latest
size: "67219073"
- id: sha256:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "31582354"
- id: sha256:05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "17082307"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:74077e780ec714353793e0ef5677b55d7396aa1d31e77ec899f54842f7142448
repoDigests:
- docker.io/library/nginx@sha256:a59278fd22a9d411121e190b8cec8aa57b306aa3332459197777583beb728f59
repoTags:
- docker.io/library/nginx:alpine
size: "17610338"
- id: sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "14557471"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "30360149"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-672946 image ls --format yaml --alsologtostderr:
I0115 14:11:10.461761 4035199 out.go:296] Setting OutFile to fd 1 ...
I0115 14:11:10.461956 4035199 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 14:11:10.461980 4035199 out.go:309] Setting ErrFile to fd 2...
I0115 14:11:10.462000 4035199 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 14:11:10.462333 4035199 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-3996034/.minikube/bin
I0115 14:11:10.462993 4035199 config.go:182] Loaded profile config "functional-672946": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 14:11:10.463294 4035199 config.go:182] Loaded profile config "functional-672946": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 14:11:10.463831 4035199 cli_runner.go:164] Run: docker container inspect functional-672946 --format={{.State.Status}}
I0115 14:11:10.491048 4035199 ssh_runner.go:195] Run: systemctl --version
I0115 14:11:10.492933 4035199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-672946
I0115 14:11:10.516460 4035199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36454 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/functional-672946/id_rsa Username:docker}
I0115 14:11:10.616610 4035199 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-672946 ssh pgrep buildkitd: exit status 1 (316.937755ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 image build -t localhost/my-image:functional-672946 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-672946 image build -t localhost/my-image:functional-672946 testdata/build --alsologtostderr: (2.1150739s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-672946 image build -t localhost/my-image:functional-672946 testdata/build --alsologtostderr:
I0115 14:11:11.335640 4035359 out.go:296] Setting OutFile to fd 1 ...
I0115 14:11:11.336710 4035359 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 14:11:11.336725 4035359 out.go:309] Setting ErrFile to fd 2...
I0115 14:11:11.336731 4035359 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0115 14:11:11.337062 4035359 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-3996034/.minikube/bin
I0115 14:11:11.337852 4035359 config.go:182] Loaded profile config "functional-672946": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 14:11:11.339996 4035359 config.go:182] Loaded profile config "functional-672946": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0115 14:11:11.340570 4035359 cli_runner.go:164] Run: docker container inspect functional-672946 --format={{.State.Status}}
I0115 14:11:11.359036 4035359 ssh_runner.go:195] Run: systemctl --version
I0115 14:11:11.359091 4035359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-672946
I0115 14:11:11.376806 4035359 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36454 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/functional-672946/id_rsa Username:docker}
I0115 14:11:11.477132 4035359 build_images.go:151] Building image from path: /tmp/build.1596686454.tar
I0115 14:11:11.477206 4035359 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0115 14:11:11.489071 4035359 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1596686454.tar
I0115 14:11:11.494237 4035359 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1596686454.tar: stat -c "%s %y" /var/lib/minikube/build/build.1596686454.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1596686454.tar': No such file or directory
I0115 14:11:11.494267 4035359 ssh_runner.go:362] scp /tmp/build.1596686454.tar --> /var/lib/minikube/build/build.1596686454.tar (3072 bytes)
I0115 14:11:11.523031 4035359 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1596686454
I0115 14:11:11.533416 4035359 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1596686454 -xf /var/lib/minikube/build/build.1596686454.tar
I0115 14:11:11.544460 4035359 containerd.go:379] Building image: /var/lib/minikube/build/build.1596686454
I0115 14:11:11.544556 4035359 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1596686454 --local dockerfile=/var/lib/minikube/build/build.1596686454 --output type=image,name=localhost/my-image:functional-672946
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.7s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:3b84f977bed96a26bceccb526d0277a31afbb62af5df53ceae60ae9420a0c8a1 0.0s done
#8 exporting config sha256:9836522cc4e249082bdd46d4a5e7e6b0b38ca23620660886eb626ab2318eae6b
#8 exporting config sha256:9836522cc4e249082bdd46d4a5e7e6b0b38ca23620660886eb626ab2318eae6b 0.0s done
#8 naming to localhost/my-image:functional-672946 done
#8 DONE 0.1s
I0115 14:11:13.347340 4035359 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1596686454 --local dockerfile=/var/lib/minikube/build/build.1596686454 --output type=image,name=localhost/my-image:functional-672946: (1.802750029s)
I0115 14:11:13.347414 4035359 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1596686454
I0115 14:11:13.358529 4035359 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1596686454.tar
I0115 14:11:13.369307 4035359 build_images.go:207] Built localhost/my-image:functional-672946 from /tmp/build.1596686454.tar
I0115 14:11:13.369337 4035359 build_images.go:123] succeeded building to: functional-672946
I0115 14:11:13.369342 4035359 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.69s)
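
As the Stderr trace shows, the build context is shipped into the guest as a tar and driven through buildctl there. The contents of testdata/build are not shown in this log, so the Dockerfile in the sketch below is inferred from the BuildKit steps (#5 FROM busybox, #6 RUN true, #7 ADD content.txt /) and is only an approximation:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	// Reconstructed build context; see the step log above.
	dockerfile := "FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		panic(err)
	}

	cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-672946",
		"image", "build", "-t", "localhost/my-image:functional-672946", dir)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}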

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.72s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
2024/01/15 14:10:53 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.685740132s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-672946
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.72s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 image rm gcr.io/google-containers/addon-resizer:functional-672946 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-672946
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-672946 image save --daemon gcr.io/google-containers/addon-resizer:functional-672946 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-672946
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)
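
The roundtrip here shows that the cluster's cached copy survives independently of the host daemon: delete the tag locally, restore it from the cluster with "image save --daemon", then inspect it again. As a sketch:

package main

import "os/exec"

func run(name string, args ...string) error {
	return exec.Command(name, args...).Run()
}

func main() {
	ref := "gcr.io/google-containers/addon-resizer:functional-672946"
	_ = run("docker", "rmi", ref) // ignore the error if the tag is already gone
	if err := run("out/minikube-linux-arm64", "-p", "functional-672946",
		"image", "save", "--daemon", ref); err != nil {
		panic(err)
	}
	if err := run("docker", "image", "inspect", ref); err != nil {
		panic(err)
	}
}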

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.09s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-672946
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-672946
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-672946
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (90.29s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-062316 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0115 14:11:34.902516 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-062316 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m30.294333033s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (90.29s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (8.43s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-062316 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-062316 addons enable ingress --alsologtostderr -v=5: (8.430006729s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (8.43s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.66s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-062316 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.66s)

                                                
                                    
TestJSONOutput/start/Command (62.33s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-306348 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0115 14:13:51.056864 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: no such file or directory
E0115 14:14:18.743370 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-306348 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m2.325310244s)
--- PASS: TestJSONOutput/start/Command (62.33s)
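
With --output=json, minikube prints one CloudEvents-style JSON object per line instead of human-readable text; the Audit and parallel step subtests below validate that stream. A consuming sketch (the "io.k8s.sigs.minikube.step" type string and the data keys are recalled from current releases, so treat them as assumptions):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

// event is a loose decoding of one JSON line from the stream.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "json-output-306348",
		"--output=json", "--user=testUser", "--memory=2200", "--driver=docker")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(stdout)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // some event lines are long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event
		}
		if ev.Type == "io.k8s.sigs.minikube.step" {
			fmt.Printf("step %s/%s: %s\n",
				ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["name"])
		}
	}
	_ = cmd.Wait()
}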

                                                
                                    
TestJSONOutput/start/Audit (0.00s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.83s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-306348 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.83s)

                                                
                                    
TestJSONOutput/pause/Audit (0.00s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.75s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-306348 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.75s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.87s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-306348 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-306348 --output=json --user=testUser: (5.87099888s)
--- PASS: TestJSONOutput/stop/Command (5.87s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.27s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-003792 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-003792 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (101.019956ms)

-- stdout --
	{"specversion":"1.0","id":"7fbf57cc-aacf-4b81-8d13-0e69ed6a1e55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-003792] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"aff67fa5-c004-4509-ba17-c8f509e8a699","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17957"}}
	{"specversion":"1.0","id":"2701f385-4b3a-4c6a-9ed2-65602f831e51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"aed312e3-0de8-42ac-867d-d8476597a182","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17957-3996034/kubeconfig"}}
	{"specversion":"1.0","id":"6beeb032-e1de-46ea-9034-585499d94cbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-3996034/.minikube"}}
	{"specversion":"1.0","id":"b8725d73-853c-41c1-adff-4adbde34e25a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"39606be6-0a79-40fa-88fa-58991588cba1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"44414c6b-7c30-4c24-83d9-68e39a46d7cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-003792" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-003792
--- PASS: TestErrorJSONOutput (0.27s)
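
The stdout block above shows what --output=json emits: one CloudEvents-style JSON object per line, with step, info, and error event types. Below is a minimal Go sketch of decoding such a stream; the struct fields mirror the JSON keys visible in the log above, and the program is illustrative only, not part of the test suite.

// decode.go - hedged sketch; field set inferred from the log lines above.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent captures the subset of fields present in the events above.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Pipe minikube output in, e.g.:
	//   out/minikube-linux-arm64 start -p demo --output=json | go run decode.go
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			// Error events carry name/exitcode/message fields,
			// as in the DRV_UNSUPPORTED_OS event above.
			fmt.Printf("error %s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			continue
		}
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}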

TestKicCustomNetwork/create_custom_network (43.93s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-622658 --network=
E0115 14:15:13.652469 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/functional-672946/client.crt: no such file or directory
E0115 14:15:13.657758 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/functional-672946/client.crt: no such file or directory
E0115 14:15:13.667994 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/functional-672946/client.crt: no such file or directory
E0115 14:15:13.688261 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/functional-672946/client.crt: no such file or directory
E0115 14:15:13.728526 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/functional-672946/client.crt: no such file or directory
E0115 14:15:13.809397 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/functional-672946/client.crt: no such file or directory
E0115 14:15:13.969726 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/functional-672946/client.crt: no such file or directory
E0115 14:15:14.290783 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/functional-672946/client.crt: no such file or directory
E0115 14:15:14.931636 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/functional-672946/client.crt: no such file or directory
E0115 14:15:16.212439 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/functional-672946/client.crt: no such file or directory
E0115 14:15:18.774149 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/functional-672946/client.crt: no such file or directory
E0115 14:15:23.894779 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/functional-672946/client.crt: no such file or directory
E0115 14:15:34.134927 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/functional-672946/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-622658 --network=: (41.770857356s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-622658" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-622658
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-622658: (2.132360613s)
--- PASS: TestKicCustomNetwork/create_custom_network (43.93s)

TestKicCustomNetwork/use_default_bridge_network (35.75s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-171920 --network=bridge
E0115 14:15:54.615349 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/functional-672946/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-171920 --network=bridge: (33.715304365s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-171920" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-171920
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-171920: (2.007733538s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.75s)

TestKicExistingNetwork (35.06s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-487066 --network=existing-network
E0115 14:16:35.575583 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/functional-672946/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-487066 --network=existing-network: (32.972656862s)
helpers_test.go:175: Cleaning up "existing-network-487066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-487066
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-487066: (1.92993492s)
--- PASS: TestKicExistingNetwork (35.06s)

TestKicCustomSubnet (36.97s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-834813 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-834813 --subnet=192.168.60.0/24: (34.835619801s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-834813 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-834813" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-834813
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-834813: (2.113145044s)
--- PASS: TestKicCustomSubnet (36.97s)
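
TestKicCustomSubnet starts the cluster with --subnet=192.168.60.0/24 and then reads the subnet back through a docker network inspect Go template. The sketch below replicates that verification step outside the test harness; the network name is the profile name from this run, and the helper itself is illustrative.

// subnet.go - hedged sketch of the verification step used above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// networkSubnet returns the first IPAM subnet of a Docker network, using the
// same template the test invokes at kic_custom_network_test.go:161.
func networkSubnet(name string) (string, error) {
	out, err := exec.Command("docker", "network", "inspect", name,
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	subnet, err := networkSubnet("custom-subnet-834813")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(subnet) // for the run above: 192.168.60.0/24
}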

TestKicStaticIP (35.12s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-413741 --static-ip=192.168.200.200
E0115 14:17:55.947406 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt: no such file or directory
E0115 14:17:55.952644 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt: no such file or directory
E0115 14:17:55.962878 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt: no such file or directory
E0115 14:17:55.983130 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt: no such file or directory
E0115 14:17:56.023378 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt: no such file or directory
E0115 14:17:56.103656 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt: no such file or directory
E0115 14:17:56.264320 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt: no such file or directory
E0115 14:17:56.584821 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt: no such file or directory
E0115 14:17:57.225722 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt: no such file or directory
E0115 14:17:57.496017 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/functional-672946/client.crt: no such file or directory
E0115 14:17:58.505970 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt: no such file or directory
E0115 14:18:01.066572 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt: no such file or directory
E0115 14:18:06.187221 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-413741 --static-ip=192.168.200.200: (32.939774651s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-413741 ip
helpers_test.go:175: Cleaning up "static-ip-413741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-413741
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-413741: (2.005303288s)
--- PASS: TestKicStaticIP (35.12s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (69.75s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-171329 --driver=docker  --container-runtime=containerd
E0115 14:18:16.427804 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt: no such file or directory
E0115 14:18:36.908021 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-171329 --driver=docker  --container-runtime=containerd: (30.033666435s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-173980 --driver=docker  --container-runtime=containerd
E0115 14:18:51.056584 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-173980 --driver=docker  --container-runtime=containerd: (34.17493736s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-171329
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-173980
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-173980" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-173980
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-173980: (2.01401079s)
helpers_test.go:175: Cleaning up "first-171329" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-171329
E0115 14:19:17.868429 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-171329: (2.245060055s)
--- PASS: TestMinikubeProfile (69.75s)
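
The profile checks above rely on out/minikube-linux-arm64 profile list -ojson. Here is a short sketch of capturing and pretty-printing that JSON without assuming anything about its schema; the binary path matches the one used throughout this report, and everything else is illustrative.

// profiles.go - hedged sketch; makes no assumptions about the JSON layout.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-ojson").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, out, "", "  "); err != nil {
		fmt.Println("output was not valid JSON:", err)
		return
	}
	fmt.Println(pretty.String())
}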

TestMountStart/serial/StartWithMountFirst (9.38s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-289367 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-289367 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.384135909s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.38s)

TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-289367 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMountStart/serial/StartWithMountSecond (6.5s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-291156 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-291156 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.503399113s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.50s)

TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-291156 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (1.66s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-289367 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-289367 --alsologtostderr -v=5: (1.655319046s)
--- PASS: TestMountStart/serial/DeleteFirst (1.66s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-291156 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (1.23s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-291156
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-291156: (1.231100942s)
--- PASS: TestMountStart/serial/Stop (1.23s)

TestMountStart/serial/RestartStopped (7.63s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-291156
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-291156: (6.625459789s)
--- PASS: TestMountStart/serial/RestartStopped (7.63s)

TestMountStart/serial/VerifyMountPostStop (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-291156 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

TestMultiNode/serial/FreshStart2Nodes (77.99s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-719657 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0115 14:20:13.652433 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/functional-672946/client.crt: no such file or directory
E0115 14:20:39.789151 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt: no such file or directory
E0115 14:20:41.337106 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/functional-672946/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-arm64 start -p multinode-719657 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m17.445520863s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (77.99s)

TestMultiNode/serial/DeployApp2Nodes (5.8s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-719657 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-719657 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-719657 -- rollout status deployment/busybox: (3.745738089s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-719657 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-719657 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-719657 -- exec busybox-5bc68d56bd-pqq2v -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-719657 -- exec busybox-5bc68d56bd-xsb5m -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-719657 -- exec busybox-5bc68d56bd-pqq2v -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-719657 -- exec busybox-5bc68d56bd-xsb5m -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-719657 -- exec busybox-5bc68d56bd-pqq2v -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-719657 -- exec busybox-5bc68d56bd-xsb5m -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.80s)

TestMultiNode/serial/PingHostFrom2Pods (1.08s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-719657 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-719657 -- exec busybox-5bc68d56bd-pqq2v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-719657 -- exec busybox-5bc68d56bd-pqq2v -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-719657 -- exec busybox-5bc68d56bd-xsb5m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-719657 -- exec busybox-5bc68d56bd-xsb5m -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.08s)

TestMultiNode/serial/AddNode (17.94s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-719657 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-719657 -v 3 --alsologtostderr: (17.213037561s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.94s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-719657 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.35s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

TestMultiNode/serial/CopyFile (11.23s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 cp testdata/cp-test.txt multinode-719657:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 ssh -n multinode-719657 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 cp multinode-719657:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2746346629/001/cp-test_multinode-719657.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 ssh -n multinode-719657 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 cp multinode-719657:/home/docker/cp-test.txt multinode-719657-m02:/home/docker/cp-test_multinode-719657_multinode-719657-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 ssh -n multinode-719657 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 ssh -n multinode-719657-m02 "sudo cat /home/docker/cp-test_multinode-719657_multinode-719657-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 cp multinode-719657:/home/docker/cp-test.txt multinode-719657-m03:/home/docker/cp-test_multinode-719657_multinode-719657-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 ssh -n multinode-719657 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 ssh -n multinode-719657-m03 "sudo cat /home/docker/cp-test_multinode-719657_multinode-719657-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 cp testdata/cp-test.txt multinode-719657-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 ssh -n multinode-719657-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 cp multinode-719657-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2746346629/001/cp-test_multinode-719657-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 ssh -n multinode-719657-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 cp multinode-719657-m02:/home/docker/cp-test.txt multinode-719657:/home/docker/cp-test_multinode-719657-m02_multinode-719657.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 ssh -n multinode-719657-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 ssh -n multinode-719657 "sudo cat /home/docker/cp-test_multinode-719657-m02_multinode-719657.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 cp multinode-719657-m02:/home/docker/cp-test.txt multinode-719657-m03:/home/docker/cp-test_multinode-719657-m02_multinode-719657-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 ssh -n multinode-719657-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 ssh -n multinode-719657-m03 "sudo cat /home/docker/cp-test_multinode-719657-m02_multinode-719657-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 cp testdata/cp-test.txt multinode-719657-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 ssh -n multinode-719657-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 cp multinode-719657-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2746346629/001/cp-test_multinode-719657-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 ssh -n multinode-719657-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 cp multinode-719657-m03:/home/docker/cp-test.txt multinode-719657:/home/docker/cp-test_multinode-719657-m03_multinode-719657.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 ssh -n multinode-719657-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 ssh -n multinode-719657 "sudo cat /home/docker/cp-test_multinode-719657-m03_multinode-719657.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 cp multinode-719657-m03:/home/docker/cp-test.txt multinode-719657-m02:/home/docker/cp-test_multinode-719657-m03_multinode-719657-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 ssh -n multinode-719657-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 ssh -n multinode-719657-m02 "sudo cat /home/docker/cp-test_multinode-719657-m03_multinode-719657-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.23s)
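
The copy checks above pair each "minikube cp" with an "ssh ... sudo cat" readback on the target node. A compact sketch of one such round trip follows; the profile name and paths come from the log, and the run helper is illustrative only.

// cpcheck.go - hedged sketch of the cp-then-cat verification used above.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	if _, err := run("-p", "multinode-719657", "cp",
		"testdata/cp-test.txt", "multinode-719657:/home/docker/cp-test.txt"); err != nil {
		fmt.Println("cp failed:", err)
		return
	}
	out, err := run("-p", "multinode-719657", "ssh", "-n", "multinode-719657",
		"sudo cat /home/docker/cp-test.txt")
	if err != nil {
		fmt.Println("readback failed:", err)
		return
	}
	fmt.Print(out) // should match the local testdata/cp-test.txt
}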

TestMultiNode/serial/StopNode (2.44s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-arm64 -p multinode-719657 node stop m03: (1.249928122s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-719657 status: exit status 7 (612.685947ms)

-- stdout --
	multinode-719657
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-719657-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-719657-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-719657 status --alsologtostderr: exit status 7 (574.588774ms)

-- stdout --
	multinode-719657
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-719657-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-719657-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0115 14:21:45.467674 4082745 out.go:296] Setting OutFile to fd 1 ...
	I0115 14:21:45.467873 4082745 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 14:21:45.467885 4082745 out.go:309] Setting ErrFile to fd 2...
	I0115 14:21:45.467892 4082745 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 14:21:45.468214 4082745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-3996034/.minikube/bin
	I0115 14:21:45.468454 4082745 out.go:303] Setting JSON to false
	I0115 14:21:45.468570 4082745 mustload.go:65] Loading cluster: multinode-719657
	I0115 14:21:45.468670 4082745 notify.go:220] Checking for updates...
	I0115 14:21:45.469120 4082745 config.go:182] Loaded profile config "multinode-719657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 14:21:45.469138 4082745 status.go:255] checking status of multinode-719657 ...
	I0115 14:21:45.470026 4082745 cli_runner.go:164] Run: docker container inspect multinode-719657 --format={{.State.Status}}
	I0115 14:21:45.492463 4082745 status.go:330] multinode-719657 host status = "Running" (err=<nil>)
	I0115 14:21:45.492500 4082745 host.go:66] Checking if "multinode-719657" exists ...
	I0115 14:21:45.492840 4082745 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-719657
	I0115 14:21:45.511208 4082745 host.go:66] Checking if "multinode-719657" exists ...
	I0115 14:21:45.511627 4082745 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 14:21:45.511677 4082745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-719657
	I0115 14:21:45.530243 4082745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36519 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/multinode-719657/id_rsa Username:docker}
	I0115 14:21:45.625839 4082745 ssh_runner.go:195] Run: systemctl --version
	I0115 14:21:45.631286 4082745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 14:21:45.644406 4082745 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 14:21:45.728855 4082745 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2024-01-15 14:21:45.719107223 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 14:21:45.729454 4082745 kubeconfig.go:92] found "multinode-719657" server: "https://192.168.58.2:8443"
	I0115 14:21:45.729478 4082745 api_server.go:166] Checking apiserver status ...
	I0115 14:21:45.729520 4082745 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0115 14:21:45.744016 4082745 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1306/cgroup
	I0115 14:21:45.755370 4082745 api_server.go:182] apiserver freezer: "4:freezer:/docker/41b98b826107496c0dcfb1f12fb9b440afe745ca570a0a36e3dbb4c1d0e57de8/kubepods/burstable/pod08199b1e7360e490e11d3f0ee071c236/7223be212d4c17ab0db3c1d232d9aa48cc3cd537b5176d9727454bac2834098d"
	I0115 14:21:45.755467 4082745 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/41b98b826107496c0dcfb1f12fb9b440afe745ca570a0a36e3dbb4c1d0e57de8/kubepods/burstable/pod08199b1e7360e490e11d3f0ee071c236/7223be212d4c17ab0db3c1d232d9aa48cc3cd537b5176d9727454bac2834098d/freezer.state
	I0115 14:21:45.765695 4082745 api_server.go:204] freezer state: "THAWED"
	I0115 14:21:45.765722 4082745 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0115 14:21:45.774536 4082745 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0115 14:21:45.774569 4082745 status.go:421] multinode-719657 apiserver status = Running (err=<nil>)
	I0115 14:21:45.774582 4082745 status.go:257] multinode-719657 status: &{Name:multinode-719657 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 14:21:45.774613 4082745 status.go:255] checking status of multinode-719657-m02 ...
	I0115 14:21:45.774916 4082745 cli_runner.go:164] Run: docker container inspect multinode-719657-m02 --format={{.State.Status}}
	I0115 14:21:45.792947 4082745 status.go:330] multinode-719657-m02 host status = "Running" (err=<nil>)
	I0115 14:21:45.792977 4082745 host.go:66] Checking if "multinode-719657-m02" exists ...
	I0115 14:21:45.793273 4082745 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-719657-m02
	I0115 14:21:45.811932 4082745 host.go:66] Checking if "multinode-719657-m02" exists ...
	I0115 14:21:45.812236 4082745 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0115 14:21:45.812288 4082745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-719657-m02
	I0115 14:21:45.832018 4082745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36524 SSHKeyPath:/home/jenkins/minikube-integration/17957-3996034/.minikube/machines/multinode-719657-m02/id_rsa Username:docker}
	I0115 14:21:45.930131 4082745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0115 14:21:45.944119 4082745 status.go:257] multinode-719657-m02 status: &{Name:multinode-719657-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0115 14:21:45.944153 4082745 status.go:255] checking status of multinode-719657-m03 ...
	I0115 14:21:45.944533 4082745 cli_runner.go:164] Run: docker container inspect multinode-719657-m03 --format={{.State.Status}}
	I0115 14:21:45.966052 4082745 status.go:330] multinode-719657-m03 host status = "Stopped" (err=<nil>)
	I0115 14:21:45.966076 4082745 status.go:343] host is not running, skipping remaining checks
	I0115 14:21:45.966084 4082745 status.go:257] multinode-719657-m03 status: &{Name:multinode-719657-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.44s)
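
Both status invocations above exit with status 7 once m03's host is stopped, while the per-node table is still printed on stdout. Below is a sketch of reading that exit code from Go; treat the 7-means-stopped mapping as an observation from this run rather than a documented contract.

// status.go - hedged sketch of inspecting minikube status's exit code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-719657", "status")
	out, err := cmd.Output() // stdout is still populated on a non-zero exit
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// In the log above this prints 7 while m03 is stopped.
		fmt.Printf("status exited with code %d\n", exitErr.ExitCode())
	}
}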

TestMultiNode/serial/StartAfterStop (12.3s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-719657 node start m03 --alsologtostderr: (11.458528635s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.30s)

TestMultiNode/serial/RestartKeepsNodes (117.83s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-719657
multinode_test.go:318: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-719657
multinode_test.go:318: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-719657: (25.088133274s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-719657 --wait=true -v=8 --alsologtostderr
E0115 14:22:55.946792 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt: no such file or directory
E0115 14:23:23.629346 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt: no such file or directory
E0115 14:23:51.056209 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-arm64 start -p multinode-719657 --wait=true -v=8 --alsologtostderr: (1m32.578862166s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-719657
--- PASS: TestMultiNode/serial/RestartKeepsNodes (117.83s)

TestMultiNode/serial/DeleteNode (5.2s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p multinode-719657 node delete m03: (4.438120635s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.20s)

TestMultiNode/serial/StopMultiNode (24.11s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-arm64 -p multinode-719657 stop: (23.905828449s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-719657 status: exit status 7 (105.147057ms)

-- stdout --
	multinode-719657
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-719657-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-719657 status --alsologtostderr: exit status 7 (101.91784ms)

-- stdout --
	multinode-719657
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-719657-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0115 14:24:25.377118 4091587 out.go:296] Setting OutFile to fd 1 ...
	I0115 14:24:25.377338 4091587 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 14:24:25.377368 4091587 out.go:309] Setting ErrFile to fd 2...
	I0115 14:24:25.377388 4091587 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 14:24:25.377654 4091587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-3996034/.minikube/bin
	I0115 14:24:25.377872 4091587 out.go:303] Setting JSON to false
	I0115 14:24:25.377988 4091587 mustload.go:65] Loading cluster: multinode-719657
	I0115 14:24:25.378044 4091587 notify.go:220] Checking for updates...
	I0115 14:24:25.378473 4091587 config.go:182] Loaded profile config "multinode-719657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 14:24:25.378516 4091587 status.go:255] checking status of multinode-719657 ...
	I0115 14:24:25.379392 4091587 cli_runner.go:164] Run: docker container inspect multinode-719657 --format={{.State.Status}}
	I0115 14:24:25.398745 4091587 status.go:330] multinode-719657 host status = "Stopped" (err=<nil>)
	I0115 14:24:25.398767 4091587 status.go:343] host is not running, skipping remaining checks
	I0115 14:24:25.398774 4091587 status.go:257] multinode-719657 status: &{Name:multinode-719657 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0115 14:24:25.398802 4091587 status.go:255] checking status of multinode-719657-m02 ...
	I0115 14:24:25.399103 4091587 cli_runner.go:164] Run: docker container inspect multinode-719657-m02 --format={{.State.Status}}
	I0115 14:24:25.416275 4091587 status.go:330] multinode-719657-m02 host status = "Stopped" (err=<nil>)
	I0115 14:24:25.416316 4091587 status.go:343] host is not running, skipping remaining checks
	I0115 14:24:25.416324 4091587 status.go:257] multinode-719657-m02 status: &{Name:multinode-719657-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.11s)

TestMultiNode/serial/RestartMultiNode (78.69s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-719657 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0115 14:25:13.651765 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/functional-672946/client.crt: no such file or directory
E0115 14:25:14.104262 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-arm64 start -p multinode-719657 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m17.919645286s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p multinode-719657 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (78.69s)
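
For context: the go-template passed to kubectl above walks every node's conditions and prints the status of the Ready condition, one line per node. The same template run locally against a stub node list (the data below is made up):

package main

import (
	"os"
	"text/template"
)

// ready is the template from the kubectl invocation above.
const ready = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	// Stub of the `kubectl get nodes -o json` shape, two Ready nodes.
	nodes := map[string]any{
		"items": []map[string]any{
			{"status": map[string]any{"conditions": []map[string]any{
				{"type": "Ready", "status": "True"},
			}}},
			{"status": map[string]any{"conditions": []map[string]any{
				{"type": "Ready", "status": "True"},
			}}},
		},
	}
	// Prints " True" once per node; the test asserts every line is True.
	if err := template.Must(template.New("ready").Parse(ready)).Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}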

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (36.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-719657
multinode_test.go:480: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-719657-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-719657-m02 --driver=docker  --container-runtime=containerd: exit status 14 (99.442077ms)

                                                
                                                
-- stdout --
	* [multinode-719657-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17957
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17957-3996034/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-3996034/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-719657-m02' is duplicated with machine name 'multinode-719657-m02' in profile 'multinode-719657'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-719657-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:488: (dbg) Done: out/minikube-linux-arm64 start -p multinode-719657-m03 --driver=docker  --container-runtime=containerd: (34.011398559s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-719657
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-719657: exit status 80 (365.315494ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-719657
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-719657-m03 already exists in multinode-719657-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-719657-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-719657-m03: (2.064009363s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.60s)
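
Note: exit status 14 is minikube's MK_USAGE code; the start was rejected because the requested profile name collides with a machine name inside an existing profile. A toy version of that uniqueness check (the map layout and helper are invented for illustration, this is not minikube's code):

package main

import "fmt"

// validateName is a toy version of the check implied by the MK_USAGE
// failure above: a new profile name must not match any machine name
// already registered under an existing profile.
func validateName(name string, profiles map[string][]string) error {
	for profile, machines := range profiles {
		for _, m := range machines {
			if m == name {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q", name, m, profile)
			}
		}
	}
	return nil
}

func main() {
	profiles := map[string][]string{
		"multinode-719657": {"multinode-719657", "multinode-719657-m02"},
	}
	fmt.Println(validateName("multinode-719657-m02", profiles)) // collides -> error
	fmt.Println(validateName("multinode-719657-m03", profiles)) // free -> <nil>
}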

                                                
                                    
TestPreload (156.71s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-103516 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-103516 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m9.479509374s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-103516 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-103516 image pull gcr.io/k8s-minikube/busybox: (1.312886897s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-103516
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-103516: (12.041750662s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-103516 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E0115 14:27:55.947554 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt: no such file or directory
E0115 14:28:51.056619 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-103516 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (1m11.25175839s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-103516 image list
helpers_test.go:175: Cleaning up "test-preload-103516" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-103516
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-103516: (2.374259766s)
--- PASS: TestPreload (156.71s)
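
For context: the test starts a cluster with --preload=false, pulls busybox, stops, restarts with preload enabled, and then uses `image list` to confirm the pulled image survived. A hedged sketch of that final check, shelling out the same way the harness does (the binary path and the substring match are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasImage runs `minikube image list` for the given profile and scans
// the output for the image reference.
func hasImage(profile, image string) (bool, error) {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", profile, "image", "list").Output()
	if err != nil {
		return false, err
	}
	return strings.Contains(string(out), image), nil
}

func main() {
	ok, err := hasImage("test-preload-103516", "gcr.io/k8s-minikube/busybox")
	fmt.Println(ok, err) // expected: true <nil> if the image survived the restart
}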

                                                
                                    
TestScheduledStopUnix (104.44s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-952645 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-952645 --memory=2048 --driver=docker  --container-runtime=containerd: (27.971300475s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-952645 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-952645 -n scheduled-stop-952645
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-952645 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-952645 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-952645 -n scheduled-stop-952645
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-952645
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-952645 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0115 14:30:13.652440 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/functional-672946/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-952645
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-952645: exit status 7 (85.497946ms)

                                                
                                                
-- stdout --
	scheduled-stop-952645
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-952645 -n scheduled-stop-952645
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-952645 -n scheduled-stop-952645: exit status 7 (84.032691ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-952645" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-952645
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-952645: (4.708720989s)
--- PASS: TestScheduledStopUnix (104.44s)
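
Aside: --schedule arms a delayed stop and --cancel-scheduled disarms it before it fires, which is why the status checks only report Stopped after the final 15s schedule elapses. An in-process analogy with a timer (minikube actually uses a background process, so this is an illustration, not its implementation):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Arm a stop, as `minikube stop --schedule 5m` does (durations shortened).
	timer := time.AfterFunc(50*time.Millisecond, func() { fmt.Println("stopping cluster") })

	// `--cancel-scheduled` corresponds to disarming the timer before it fires.
	if timer.Stop() {
		fmt.Println("scheduled stop cancelled")
	}

	// A second `--schedule` re-arms it; Reset on a stopped timer restarts it.
	timer.Reset(100 * time.Millisecond)
	time.Sleep(200 * time.Millisecond) // give the callback time to fire
}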

                                                
                                    
TestInsufficientStorage (10.81s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-023474 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-023474 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.258445774s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7a4d6a18-5fc1-4ca6-a7b2-93fc48a8bc8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-023474] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0a4a8bbf-7a52-424a-8c38-5b4879b73e5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17957"}}
	{"specversion":"1.0","id":"8a98f572-b674-48e3-9e38-0433ae0bfc89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"42921a2b-1151-4371-b59b-61580e7264bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17957-3996034/kubeconfig"}}
	{"specversion":"1.0","id":"0631b052-a143-49ac-9fba-c673ff0312c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-3996034/.minikube"}}
	{"specversion":"1.0","id":"24607be0-c795-4396-ab1c-4e573fd7ab71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"9ccd52cf-7893-41b8-a577-7dfc05f94200","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"12278778-fa32-4f70-b6d7-feefb4ed48fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"eba6ec0c-5fd6-4562-8157-9900eed98e0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c4759821-398c-42de-a06b-6ff42bb42425","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"120c6ee3-05ac-4977-a308-01eed6613f9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"2890da84-a411-4d54-ad89-7f9458fcb796","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-023474 in cluster insufficient-storage-023474","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d1935406-ed7d-4767-b2ae-602bc9d8f502","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1704759386-17866 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"aee260ad-9bdf-4af9-9783-29238da142e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"31e30abd-7296-4928-933a-15a02a919be0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-023474 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-023474 --output=json --layout=cluster: exit status 7 (317.6113ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-023474","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-023474","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 14:30:54.390109 4108761 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-023474" does not appear in /home/jenkins/minikube-integration/17957-3996034/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-023474 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-023474 --output=json --layout=cluster: exit status 7 (325.236855ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-023474","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-023474","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0115 14:30:54.715660 4108814 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-023474" does not appear in /home/jenkins/minikube-integration/17957-3996034/kubeconfig
	E0115 14:30:54.727368 4108814 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/insufficient-storage-023474/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-023474" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-023474
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-023474: (1.908377457s)
--- PASS: TestInsufficientStorage (10.81s)
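
Note: with --output=json minikube emits one CloudEvents-style object per line, as above. A small decoder that pulls the failure out of the final io.k8s.sigs.minikube.error event (only fields visible in the log are used; the sample line is abbreviated):

package main

import (
	"encoding/json"
	"fmt"
)

// event holds just the fields of the JSON lines above that we need.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Abbreviated copy of the error event from the log above.
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space!"}}`

	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		panic(err)
	}
	if e.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("%s (exit %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
	}
}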

                                                
                                    
TestRunningBinaryUpgrade (81.68s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2473005965 start -p running-upgrade-394257 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2473005965 start -p running-upgrade-394257 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (43.397542387s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-394257 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-394257 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (34.490052984s)
helpers_test.go:175: Cleaning up "running-upgrade-394257" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-394257
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-394257: (2.544837848s)
--- PASS: TestRunningBinaryUpgrade (81.68s)

                                                
                                    
TestKubernetesUpgrade (376.22s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-852585 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-852585 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m1.205812361s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-852585
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-852585: (3.456412315s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-852585 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-852585 status --format={{.Host}}: exit status 7 (142.780317ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-852585 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-852585 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m41.481227887s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-852585 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-852585 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-852585 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd: exit status 106 (126.711644ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-852585] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17957
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17957-3996034/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-3996034/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-852585
	    minikube start -p kubernetes-upgrade-852585 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8525852 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-852585 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-852585 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-852585 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (26.599869511s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-852585" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-852585
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-852585: (3.099812585s)
--- PASS: TestKubernetesUpgrade (376.22s)
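
For context: exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) comes from comparing the running cluster's Kubernetes version with the requested one. A minimal form of that comparison using golang.org/x/mod/semver; minikube's real check is more involved:

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// downgradeRequested reports whether the requested version is older than
// the version the cluster already runs; both must be "v"-prefixed, as in
// the log ("v1.29.0-rc.2", "v1.16.0").
func downgradeRequested(current, requested string) bool {
	return semver.Compare(requested, current) < 0
}

func main() {
	fmt.Println(downgradeRequested("v1.29.0-rc.2", "v1.16.0")) // true  -> refuse, exit 106
	fmt.Println(downgradeRequested("v1.16.0", "v1.29.0-rc.2")) // false -> upgrade allowed
}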

                                                
                                    
TestMissingContainerUpgrade (164.13s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.27235717 start -p missing-upgrade-771473 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.27235717 start -p missing-upgrade-771473 --memory=2200 --driver=docker  --container-runtime=containerd: (1m22.721397629s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-771473
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-771473
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-771473 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0115 14:32:55.947059 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt: no such file or directory
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-771473 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m15.627787525s)
helpers_test.go:175: Cleaning up "missing-upgrade-771473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-771473
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-771473: (2.706287721s)
--- PASS: TestMissingContainerUpgrade (164.13s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-101761 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-101761 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (90.918681ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-101761] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17957
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17957-3996034/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-3996034/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
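
Note: exit status 14 here is again the usage-error path, this time for mutually exclusive flags. A generic sketch of that validation with the standard flag package (the flag names are taken from the command above; everything else is invented):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	// The combination seen in the log is rejected before any work starts.
	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14)
	}
	fmt.Println("flags are consistent")
}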

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (38.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-101761 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-101761 --driver=docker  --container-runtime=containerd: (37.636036888s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-101761 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.13s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (19.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-101761 --no-kubernetes --driver=docker  --container-runtime=containerd
E0115 14:31:36.698131 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/functional-672946/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-101761 --no-kubernetes --driver=docker  --container-runtime=containerd: (17.099788335s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-101761 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-101761 status -o json: exit status 2 (334.728194ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-101761","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-101761
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-101761: (1.929421336s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.36s)

                                                
                                    
TestNoKubernetes/serial/Start (6.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-101761 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-101761 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.35883988s)
--- PASS: TestNoKubernetes/serial/Start (6.36s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-101761 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-101761 "sudo systemctl is-active --quiet service kubelet": exit status 1 (384.937158ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.12s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-101761
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-101761: (1.296122048s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-101761 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-101761 --driver=docker  --container-runtime=containerd: (8.320860403s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.32s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-101761 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-101761 "sudo systemctl is-active --quiet service kubelet": exit status 1 (534.988882ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.54s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.17s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.17s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (109.99s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1465190963 start -p stopped-upgrade-397482 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0115 14:33:51.056525 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: no such file or directory
E0115 14:34:18.991350 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1465190963 start -p stopped-upgrade-397482 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (45.004882367s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1465190963 -p stopped-upgrade-397482 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1465190963 -p stopped-upgrade-397482 stop: (19.974488272s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-397482 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0115 14:35:13.652223 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/functional-672946/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-397482 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (45.010123367s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (109.99s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-397482
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-397482: (1.122742671s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)

                                                
                                    
TestPause/serial/Start (58.78s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-955781 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E0115 14:37:55.947020 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-955781 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (58.782865319s)
--- PASS: TestPause/serial/Start (58.78s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.37s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-955781 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-955781 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.35516327s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.37s)

                                                
                                    
TestPause/serial/Pause (1.07s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-955781 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-955781 --alsologtostderr -v=5: (1.067461876s)
--- PASS: TestPause/serial/Pause (1.07s)

                                                
                                    
TestPause/serial/VerifyStatus (0.48s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-955781 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-955781 --output=json --layout=cluster: exit status 2 (483.167941ms)

                                                
                                                
-- stdout --
	{"Name":"pause-955781","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-955781","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.48s)
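
Reader's note: the --layout=cluster JSON reuses HTTP-flavoured status codes (200 OK, 405 Stopped, 418 Paused, 507 InsufficientStorage). Decoding an abbreviated copy of the document above (struct shapes are inferred from the JSON itself, not taken from minikube's source):

package main

import (
	"encoding/json"
	"fmt"
)

// Types inferred from the --layout=cluster JSON printed above.
type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type cluster struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

func main() {
	raw := `{"Name":"pause-955781","StatusCode":418,"StatusName":"Paused",
	         "Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}}}`

	var c cluster
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %d %s\n", c.Name, c.StatusCode, c.StatusName) // pause-955781: 418 Paused
}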

                                                
                                    
TestPause/serial/Unpause (0.92s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-955781 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.92s)

                                                
                                    
TestPause/serial/PauseAgain (1.1s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-955781 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-955781 --alsologtostderr -v=5: (1.103974345s)
--- PASS: TestPause/serial/PauseAgain (1.10s)

                                                
                                    
TestPause/serial/DeletePaused (3.27s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-955781 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-955781 --alsologtostderr -v=5: (3.27476421s)
--- PASS: TestPause/serial/DeletePaused (3.27s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (12.88s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (12.780902511s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-955781
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-955781: exit status 1 (38.664793ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-955781: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (12.88s)
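
Aside: the cleanup assertion leans on `docker volume inspect` exiting non-zero for a missing volume, exactly the exit status 1 shown above. A sketch of that check via os/exec:

package main

import (
	"fmt"
	"os/exec"
)

// volumeGone reports whether `docker volume inspect <name>` fails, which
// the test treats as proof the volume was deleted.
func volumeGone(name string) bool {
	err := exec.Command("docker", "volume", "inspect", name).Run()
	return err != nil // non-zero exit (as in the log) means no such volume
}

func main() {
	fmt.Println(volumeGone("pause-955781")) // expected: true after `minikube delete`
}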

                                                
                                    
x
+
TestNetworkPlugins/group/false (5.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-010883 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-010883 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (280.757146ms)

                                                
                                                
-- stdout --
	* [false-010883] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17957
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17957-3996034/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-3996034/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0115 14:38:36.370229 4146456 out.go:296] Setting OutFile to fd 1 ...
	I0115 14:38:36.370587 4146456 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 14:38:36.370596 4146456 out.go:309] Setting ErrFile to fd 2...
	I0115 14:38:36.370602 4146456 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0115 14:38:36.370853 4146456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17957-3996034/.minikube/bin
	I0115 14:38:36.371308 4146456 out.go:303] Setting JSON to false
	I0115 14:38:36.372184 4146456 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":69660,"bootTime":1705259857,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0115 14:38:36.372253 4146456 start.go:138] virtualization:  
	I0115 14:38:36.375058 4146456 out.go:177] * [false-010883] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0115 14:38:36.377432 4146456 out.go:177]   - MINIKUBE_LOCATION=17957
	I0115 14:38:36.379357 4146456 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0115 14:38:36.377523 4146456 notify.go:220] Checking for updates...
	I0115 14:38:36.381258 4146456 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17957-3996034/kubeconfig
	I0115 14:38:36.383286 4146456 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17957-3996034/.minikube
	I0115 14:38:36.385375 4146456 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0115 14:38:36.387190 4146456 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0115 14:38:36.389782 4146456 config.go:182] Loaded profile config "force-systemd-flag-366089": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0115 14:38:36.389900 4146456 driver.go:392] Setting default libvirt URI to qemu:///system
	I0115 14:38:36.418360 4146456 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I0115 14:38:36.418555 4146456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0115 14:38:36.548530 4146456 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:45 SystemTime:2024-01-15 14:38:36.534937667 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:a1496014c916f9e62104b33d1bb5bd03b0858e59 Expected:a1496014c916f9e62104b33d1bb5bd03b0858e59} RuncCommit:{ID:v1.1.11-0-g4bccb38 Expected:v1.1.11-0-g4bccb38} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0115 14:38:36.548632 4146456 docker.go:295] overlay module found
	I0115 14:38:36.560816 4146456 out.go:177] * Using the docker driver based on user configuration
	I0115 14:38:36.562539 4146456 start.go:298] selected driver: docker
	I0115 14:38:36.562551 4146456 start.go:902] validating driver "docker" against <nil>
	I0115 14:38:36.562564 4146456 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0115 14:38:36.564775 4146456 out.go:177] 
	W0115 14:38:36.566653 4146456 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0115 14:38:36.568553 4146456 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-010883 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-010883

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-010883

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-010883

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-010883

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-010883

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-010883

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-010883

                                                
                                                

                                                

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-010883

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-010883

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-010883

>>> host: /etc/nsswitch.conf:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> host: /etc/hosts:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> host: /etc/resolv.conf:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-010883

>>> host: crictl pods:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> host: crictl containers:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> k8s: describe netcat deployment:
error: context "false-010883" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-010883" does not exist

>>> k8s: netcat logs:
error: context "false-010883" does not exist

>>> k8s: describe coredns deployment:
error: context "false-010883" does not exist

>>> k8s: describe coredns pods:
error: context "false-010883" does not exist

>>> k8s: coredns logs:
error: context "false-010883" does not exist

>>> k8s: describe api server pod(s):
error: context "false-010883" does not exist

>>> k8s: api server logs:
error: context "false-010883" does not exist

>>> host: /etc/cni:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> host: ip a s:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> host: ip r s:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> host: iptables-save:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> host: iptables table nat:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> k8s: describe kube-proxy daemon set:
error: context "false-010883" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-010883" does not exist

>>> k8s: kube-proxy logs:
error: context "false-010883" does not exist

>>> host: kubelet daemon status:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> host: kubelet daemon config:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> k8s: kubelet logs:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-010883

>>> host: docker daemon status:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> host: docker daemon config:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> host: /etc/docker/daemon.json:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> host: docker system info:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> host: cri-docker daemon status:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> host: cri-docker daemon config:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> host: cri-dockerd version:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> host: containerd daemon status:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> host: containerd daemon config:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> host: /etc/containerd/config.toml:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> host: containerd config dump:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> host: crio daemon status:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> host: crio daemon config:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> host: /etc/crio:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

>>> host: crio config:
* Profile "false-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-010883"

----------------------- debugLogs end: false-010883 [took: 4.756251215s] --------------------------------
helpers_test.go:175: Cleaning up "false-010883" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-010883
--- PASS: TestNetworkPlugins/group/false (5.26s)

TestStartStop/group/old-k8s-version/serial/FirstStart (128.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-673114 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E0115 14:40:13.651857 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/functional-672946/client.crt: no such file or directory
E0115 14:41:54.104475 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-673114 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m8.470111676s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (128.47s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-673114 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [99d3d352-3fc6-4694-b31e-521b50d64aca] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [99d3d352-3fc6-4694-b31e-521b50d64aca] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.002681358s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-673114 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.43s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-673114 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-673114 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/old-k8s-version/serial/Stop (12.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-673114 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-673114 --alsologtostderr -v=3: (12.135922773s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.14s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-673114 -n old-k8s-version-673114
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-673114 -n old-k8s-version-673114: exit status 7 (90.486546ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-673114 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/old-k8s-version/serial/SecondStart (647.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-673114 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-673114 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (10m46.886512486s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-673114 -n old-k8s-version-673114
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (647.30s)

TestStartStop/group/no-preload/serial/FirstStart (68.1s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-693361 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0115 14:43:51.056446 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-693361 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (1m8.102867854s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (68.10s)

TestStartStop/group/no-preload/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-693361 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1fe259cc-67d6-4755-83e4-6e44b3da5405] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1fe259cc-67d6-4755-83e4-6e44b3da5405] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003905705s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-693361 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.35s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-693361 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-693361 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.034252599s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-693361 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/no-preload/serial/Stop (12.14s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-693361 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-693361 --alsologtostderr -v=3: (12.1373678s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.14s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-693361 -n no-preload-693361
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-693361 -n no-preload-693361: exit status 7 (86.345138ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-693361 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (337.8s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-693361 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0115 14:45:13.652456 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/functional-672946/client.crt: no such file or directory
E0115 14:47:55.946847 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt: no such file or directory
E0115 14:48:16.698358 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/functional-672946/client.crt: no such file or directory
E0115 14:48:51.056115 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-693361 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (5m37.284940711s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-693361 -n no-preload-693361
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (337.80s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-d96m9" [d9233d2a-4a2d-4e50-9dcb-b02aeb7fbbe3] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-d96m9" [d9233d2a-4a2d-4e50-9dcb-b02aeb7fbbe3] Running
E0115 14:50:13.652393 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/functional-672946/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.004091237s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-d96m9" [d9233d2a-4a2d-4e50-9dcb-b02aeb7fbbe3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003862183s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-693361 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-693361 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/no-preload/serial/Pause (3.33s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-693361 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-693361 -n no-preload-693361
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-693361 -n no-preload-693361: exit status 2 (360.027602ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-693361 -n no-preload-693361
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-693361 -n no-preload-693361: exit status 2 (361.608823ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-693361 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-693361 -n no-preload-693361
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-693361 -n no-preload-693361
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.33s)

TestStartStop/group/embed-certs/serial/FirstStart (58.27s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-758635 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0115 14:50:58.992162 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-758635 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (58.270730147s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (58.27s)

TestStartStop/group/embed-certs/serial/DeployApp (8.33s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-758635 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [045abdc1-7703-44fa-8a42-9a677f09e5b6] Pending
helpers_test.go:344: "busybox" [045abdc1-7703-44fa-8a42-9a677f09e5b6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [045abdc1-7703-44fa-8a42-9a677f09e5b6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.00400173s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-758635 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.33s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-758635 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-758635 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.07974303s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-758635 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/embed-certs/serial/Stop (12.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-758635 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-758635 --alsologtostderr -v=3: (12.111017889s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.11s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-758635 -n embed-certs-758635
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-758635 -n embed-certs-758635: exit status 7 (88.406958ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-758635 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (340.49s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-758635 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0115 14:52:55.947130 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-758635 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m40.069235374s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-758635 -n embed-certs-758635
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (340.49s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-sv7z4" [b23b6b47-0db6-4613-a14e-0056a5cd4819] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004334731s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-sv7z4" [b23b6b47-0db6-4613-a14e-0056a5cd4819] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00370872s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-673114 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-673114 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/old-k8s-version/serial/Pause (3.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-673114 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-673114 -n old-k8s-version-673114
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-673114 -n old-k8s-version-673114: exit status 2 (380.720173ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-673114 -n old-k8s-version-673114
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-673114 -n old-k8s-version-673114: exit status 2 (364.201014ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-673114 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-673114 -n old-k8s-version-673114
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-673114 -n old-k8s-version-673114
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.50s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (58.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-122653 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0115 14:53:51.056306 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: no such file or directory
E0115 14:54:05.743625 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/no-preload-693361/client.crt: no such file or directory
E0115 14:54:05.748865 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/no-preload-693361/client.crt: no such file or directory
E0115 14:54:05.759113 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/no-preload-693361/client.crt: no such file or directory
E0115 14:54:05.779372 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/no-preload-693361/client.crt: no such file or directory
E0115 14:54:05.819631 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/no-preload-693361/client.crt: no such file or directory
E0115 14:54:05.899923 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/no-preload-693361/client.crt: no such file or directory
E0115 14:54:06.060389 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/no-preload-693361/client.crt: no such file or directory
E0115 14:54:06.380857 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/no-preload-693361/client.crt: no such file or directory
E0115 14:54:07.021588 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/no-preload-693361/client.crt: no such file or directory
E0115 14:54:08.302124 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/no-preload-693361/client.crt: no such file or directory
E0115 14:54:10.863341 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/no-preload-693361/client.crt: no such file or directory
E0115 14:54:15.984030 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/no-preload-693361/client.crt: no such file or directory
E0115 14:54:26.225195 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/no-preload-693361/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-122653 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (58.140163555s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (58.14s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-122653 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cde99f4a-559b-409f-a4a8-02e465836543] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0115 14:54:46.705725 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/no-preload-693361/client.crt: no such file or directory
helpers_test.go:344: "busybox" [cde99f4a-559b-409f-a4a8-02e465836543] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003239911s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-122653 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-122653 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-122653 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.105166103s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-122653 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.21s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-122653 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-122653 --alsologtostderr -v=3: (12.155832967s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.16s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-122653 -n default-k8s-diff-port-122653
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-122653 -n default-k8s-diff-port-122653: exit status 7 (90.780512ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-122653 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (347.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-122653 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0115 14:55:13.652307 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/functional-672946/client.crt: no such file or directory
E0115 14:55:27.666073 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/no-preload-693361/client.crt: no such file or directory
E0115 14:56:49.586778 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/no-preload-693361/client.crt: no such file or directory
E0115 14:57:17.676995 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/old-k8s-version-673114/client.crt: no such file or directory
E0115 14:57:17.682628 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/old-k8s-version-673114/client.crt: no such file or directory
E0115 14:57:17.692957 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/old-k8s-version-673114/client.crt: no such file or directory
E0115 14:57:17.713192 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/old-k8s-version-673114/client.crt: no such file or directory
E0115 14:57:17.754084 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/old-k8s-version-673114/client.crt: no such file or directory
E0115 14:57:17.834327 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/old-k8s-version-673114/client.crt: no such file or directory
E0115 14:57:17.995446 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/old-k8s-version-673114/client.crt: no such file or directory
E0115 14:57:18.316135 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/old-k8s-version-673114/client.crt: no such file or directory
E0115 14:57:18.956774 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/old-k8s-version-673114/client.crt: no such file or directory
E0115 14:57:20.237722 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/old-k8s-version-673114/client.crt: no such file or directory
E0115 14:57:22.798249 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/old-k8s-version-673114/client.crt: no such file or directory
E0115 14:57:27.918690 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/old-k8s-version-673114/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-122653 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m46.645991572s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-122653 -n default-k8s-diff-port-122653
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (347.25s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zksm8" [0c52245f-b7b2-4645-8255-d5c2e28b8862] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zksm8" [0c52245f-b7b2-4645-8255-d5c2e28b8862] Running
E0115 14:57:38.158924 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/old-k8s-version-673114/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.00366548s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.00s)
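
The readiness poll above (helpers_test.go:344) can be approximated by hand with kubectl wait; a minimal sketch against the same profile, reusing the selector, namespace, and 9m0s timeout shown in the log:

	kubectl --context embed-certs-758635 wait --for=condition=ready --namespace=kubernetes-dashboard pod --selector=k8s-app=kubernetes-dashboard --timeout=9m0s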

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-zksm8" [0c52245f-b7b2-4645-8255-d5c2e28b8862] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004492606s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-758635 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-758635 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)
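
The image check above runs minikube's image list command and flags images outside the minikube namespace. To reproduce the listing by hand, the same command can be piped through jq; jq and the .repoTags field name are assumptions for illustration, not something the harness uses:

	out/minikube-linux-arm64 -p embed-certs-758635 image list --format=json | jq -r '.[].repoTags[]'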

TestStartStop/group/embed-certs/serial/Pause (3.44s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-758635 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-758635 -n embed-certs-758635
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-758635 -n embed-certs-758635: exit status 2 (379.501424ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-758635 -n embed-certs-758635
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-758635 -n embed-certs-758635: exit status 2 (377.488386ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-758635 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-758635 -n embed-certs-758635
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-758635 -n embed-certs-758635
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.44s)
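
Condensed, the pause check above is a four-step sequence; the commands are copied from the log, and the exit status 2 from status is expected while components are paused (hence the "may be ok" note):

	out/minikube-linux-arm64 pause -p embed-certs-758635
	out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-758635   # prints "Paused", exit status 2
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-758635    # prints "Stopped", exit status 2
	out/minikube-linux-arm64 unpause -p embed-certs-758635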

TestStartStop/group/newest-cni/serial/FirstStart (47.42s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-004933 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0115 14:57:55.946844 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt: no such file or directory
E0115 14:57:58.639365 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/old-k8s-version-673114/client.crt: no such file or directory
E0115 14:58:34.104659 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: no such file or directory
E0115 14:58:39.599546 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/old-k8s-version-673114/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-004933 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (47.420377486s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.42s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-004933 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-004933 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.18372254s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.18s)

TestStartStop/group/newest-cni/serial/Stop (1.3s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-004933 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-004933 --alsologtostderr -v=3: (1.297483201s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.30s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-004933 -n newest-cni-004933
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-004933 -n newest-cni-004933: exit status 7 (100.696687ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-004933 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/newest-cni/serial/SecondStart (31.4s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-004933 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0115 14:58:51.056257 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/addons-916083/client.crt: no such file or directory
E0115 14:59:05.743537 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/no-preload-693361/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-004933 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (31.012901991s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-004933 -n newest-cni-004933
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (31.40s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-004933 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/Pause (3.39s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-004933 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-004933 -n newest-cni-004933
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-004933 -n newest-cni-004933: exit status 2 (406.575742ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-004933 -n newest-cni-004933
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-004933 -n newest-cni-004933: exit status 2 (369.724797ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-004933 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-004933 -n newest-cni-004933
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-004933 -n newest-cni-004933
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.39s)

TestNetworkPlugins/group/auto/Start (49.7s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-010883 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0115 14:59:33.427178 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/no-preload-693361/client.crt: no such file or directory
E0115 15:00:01.520732 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/old-k8s-version-673114/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-010883 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (49.695059835s)
--- PASS: TestNetworkPlugins/group/auto/Start (49.70s)

TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-010883 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

TestNetworkPlugins/group/auto/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-010883 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7kkn6" [d9ac44ea-e919-4360-8316-2d745243261f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7kkn6" [d9ac44ea-e919-4360-8316-2d745243261f] Running
E0115 15:00:13.652104 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/functional-672946/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004196871s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.28s)

TestNetworkPlugins/group/auto/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-010883 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.29s)

TestNetworkPlugins/group/auto/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-010883 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.22s)

TestNetworkPlugins/group/auto/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-010883 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.22s)
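
The three probes above (DNS, Localhost, HairPin) all exec into the same netcat deployment; run by hand against this profile they look like the sketch below. The nc flags are as logged: -z scans without sending data, -w 5 sets a 5-second timeout, -i 5 adds a delay interval, and the hairpin case checks that the pod can reach itself through its own service name:

	kubectl --context auto-010883 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-010883 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-010883 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"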

TestNetworkPlugins/group/kindnet/Start (63.96s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-010883 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-010883 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m3.963073714s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (63.96s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (15.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-lqv8p" [a6341840-d10e-4682-b46b-39fbc69644d1] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-lqv8p" [a6341840-d10e-4682-b46b-39fbc69644d1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.004429635s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (15.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-lqv8p" [a6341840-d10e-4682-b46b-39fbc69644d1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004152167s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-122653 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-122653 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-122653 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-122653 --alsologtostderr -v=1: (1.169565546s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-122653 -n default-k8s-diff-port-122653
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-122653 -n default-k8s-diff-port-122653: exit status 2 (388.760753ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-122653 -n default-k8s-diff-port-122653
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-122653 -n default-k8s-diff-port-122653: exit status 2 (373.357474ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-122653 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-122653 --alsologtostderr -v=1: (1.009724089s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-122653 -n default-k8s-diff-port-122653
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-122653 -n default-k8s-diff-port-122653
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.40s)
E0115 15:06:05.658290 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/default-k8s-diff-port-122653/client.crt: no such file or directory
E0115 15:06:32.163688 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/auto-010883/client.crt: no such file or directory
E0115 15:06:48.939590 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/kindnet-010883/client.crt: no such file or directory
E0115 15:06:48.944888 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/kindnet-010883/client.crt: no such file or directory
E0115 15:06:48.955148 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/kindnet-010883/client.crt: no such file or directory
E0115 15:06:48.975392 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/kindnet-010883/client.crt: no such file or directory
E0115 15:06:49.015740 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/kindnet-010883/client.crt: no such file or directory
E0115 15:06:49.096235 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/kindnet-010883/client.crt: no such file or directory
E0115 15:06:49.256748 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/kindnet-010883/client.crt: no such file or directory
E0115 15:06:49.577012 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/kindnet-010883/client.crt: no such file or directory
E0115 15:06:50.217648 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/kindnet-010883/client.crt: no such file or directory

TestNetworkPlugins/group/calico/Start (74.05s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-010883 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-010883 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m14.053056428s)
--- PASS: TestNetworkPlugins/group/calico/Start (74.05s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-r629f" [a677500c-7e98-4901-a6f6-81698741403a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004004184s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-010883 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.47s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.35s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-010883 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4w6dv" [53c9cdef-6e81-426f-8443-f563dbcc59e6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4w6dv" [53c9cdef-6e81-426f-8443-f563dbcc59e6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003856849s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.35s)

TestNetworkPlugins/group/kindnet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-010883 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-010883 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

TestNetworkPlugins/group/kindnet/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-010883 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

TestNetworkPlugins/group/custom-flannel/Start (62.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-010883 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-010883 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m2.274192365s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (62.27s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-lsl7k" [3670d40e-213a-4587-a360-ce2707c0218c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005817157s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-010883 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

TestNetworkPlugins/group/calico/NetCatPod (12.32s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-010883 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-s5xpq" [30e5cab8-93ff-49bf-8c61-10756b22541a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0115 15:02:45.360950 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/old-k8s-version-673114/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-s5xpq" [30e5cab8-93ff-49bf-8c61-10756b22541a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003723197s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.32s)

TestNetworkPlugins/group/calico/DNS (0.33s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-010883 exec deployment/netcat -- nslookup kubernetes.default
E0115 15:02:55.947120 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/ingress-addon-legacy-062316/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/calico/DNS (0.33s)

TestNetworkPlugins/group/calico/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-010883 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

TestNetworkPlugins/group/calico/HairPin (0.29s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-010883 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.29s)

TestNetworkPlugins/group/enable-default-cni/Start (88.06s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-010883 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-010883 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m28.063227829s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (88.06s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-010883 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-010883 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ljtll" [4afe2803-00e4-496f-b0cc-c72b2393c321] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-ljtll" [4afe2803-00e4-496f-b0cc-c72b2393c321] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005307549s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.31s)

TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-010883 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-010883 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-010883 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

TestNetworkPlugins/group/flannel/Start (57.61s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-010883 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0115 15:04:43.723832 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/default-k8s-diff-port-122653/client.crt: no such file or directory
E0115 15:04:43.729102 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/default-k8s-diff-port-122653/client.crt: no such file or directory
E0115 15:04:43.739323 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/default-k8s-diff-port-122653/client.crt: no such file or directory
E0115 15:04:43.759610 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/default-k8s-diff-port-122653/client.crt: no such file or directory
E0115 15:04:43.799848 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/default-k8s-diff-port-122653/client.crt: no such file or directory
E0115 15:04:43.880086 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/default-k8s-diff-port-122653/client.crt: no such file or directory
E0115 15:04:44.040407 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/default-k8s-diff-port-122653/client.crt: no such file or directory
E0115 15:04:44.360855 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/default-k8s-diff-port-122653/client.crt: no such file or directory
E0115 15:04:45.013544 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/default-k8s-diff-port-122653/client.crt: no such file or directory
E0115 15:04:46.294433 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/default-k8s-diff-port-122653/client.crt: no such file or directory
E0115 15:04:48.855182 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/default-k8s-diff-port-122653/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-010883 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (57.608223022s)
--- PASS: TestNetworkPlugins/group/flannel/Start (57.61s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-010883 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-010883 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8k2xb" [bd2c9807-9851-4ac5-9698-9689141ccecd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0115 15:04:53.975354 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/default-k8s-diff-port-122653/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-8k2xb" [bd2c9807-9851-4ac5-9698-9689141ccecd] Running
E0115 15:04:56.699371 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/functional-672946/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004473611s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-010883 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-010883 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-010883 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
E0115 15:05:10.318579 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/auto-010883/client.crt: no such file or directory
helpers_test.go:344: "kube-flannel-ds-4l5h7" [46c071b7-6334-4ea0-b006-8ca60b9729c3] Running
E0115 15:05:10.398819 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/auto-010883/client.crt: no such file or directory
E0115 15:05:10.559167 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/auto-010883/client.crt: no such file or directory
E0115 15:05:10.880288 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/auto-010883/client.crt: no such file or directory
E0115 15:05:11.520764 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/auto-010883/client.crt: no such file or directory
E0115 15:05:12.801195 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/auto-010883/client.crt: no such file or directory
E0115 15:05:13.652004 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/functional-672946/client.crt: no such file or directory
E0115 15:05:15.361606 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/auto-010883/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00450816s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-010883 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.45s)

TestNetworkPlugins/group/flannel/NetCatPod (10.42s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-010883 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-h7q92" [099e141b-7a5a-4f31-be4a-fb4f6e6ff84e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0115 15:05:20.482504 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/auto-010883/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-h7q92" [099e141b-7a5a-4f31-be4a-fb4f6e6ff84e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003851148s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.42s)

TestNetworkPlugins/group/bridge/Start (86.07s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-010883 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-010883 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m26.070394181s)
--- PASS: TestNetworkPlugins/group/bridge/Start (86.07s)

TestNetworkPlugins/group/flannel/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-010883 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.27s)

TestNetworkPlugins/group/flannel/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-010883 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.23s)

TestNetworkPlugins/group/flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-010883 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-010883 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

TestNetworkPlugins/group/bridge/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-010883 replace --force -f testdata/netcat-deployment.yaml
E0115 15:06:51.498583 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/kindnet-010883/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ghcvt" [1b22308c-fc6b-4bd5-b383-4b6ea1bbc57b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0115 15:06:54.059541 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/kindnet-010883/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-ghcvt" [1b22308c-fc6b-4bd5-b383-4b6ea1bbc57b] Running
E0115 15:06:59.180319 4001369 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17957-3996034/.minikube/profiles/kindnet-010883/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003828615s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.28s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-010883 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-010883 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-010883 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)


Test skip (31/320)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0.65s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-152127 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-152127" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-152127
--- SKIP: TestDownloadOnlyKic (0.65s)
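
Editor's note: several skips below are architecture guards tied to minikube issue #10144. A sketch of the usual shape of such a guard follows; it is illustrative only, not the aaa_download_only_test.go code, and the skip message is copied from the log above.

-- editor's sketch (Go) --
package example

import (
	"runtime"
	"testing"
)

// TestDownloadOnlyKicSketch bails out before doing any work on arm64,
// mirroring the skip recorded in the log above.
func TestDownloadOnlyKicSketch(t *testing.T) {
	if runtime.GOARCH == "arm64" {
		t.Skip("Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144")
	}
	// The real test would exercise the KIC base-image download here.
}
-- /editor's sketch --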

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)
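
Editor's note: this run pins the container runtime to containerd, so docker-only tests skip themselves. A sketch of such a runtime gate follows; the -container-runtime flag name here is hypothetical, chosen only to illustrate how a suite might learn which runtime is under test.

-- editor's sketch (Go) --
package example

import (
	"flag"
	"testing"
)

// containerRuntime is a hypothetical stand-in for however the suite is
// told which runtime it is exercising.
var containerRuntime = flag.String("container-runtime", "containerd", "runtime under test")

func TestDockerFlagsSketch(t *testing.T) {
	if *containerRuntime != "docker" {
		t.Skipf("skipping: only runs with docker container runtime, currently testing %s", *containerRuntime)
	}
	// Docker-specific assertions would follow here.
}
-- /editor's sketch --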

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1786: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-499748" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-499748
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

TestNetworkPlugins/group/kubenet (5.87s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-010883 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-010883

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-010883

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-010883

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-010883

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-010883

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-010883

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-010883

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-010883

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-010883

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-010883

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> host: /etc/hosts:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> host: /etc/resolv.conf:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-010883

>>> host: crictl pods:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> host: crictl containers:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> k8s: describe netcat deployment:
error: context "kubenet-010883" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-010883" does not exist

>>> k8s: netcat logs:
error: context "kubenet-010883" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-010883" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-010883" does not exist

>>> k8s: coredns logs:
error: context "kubenet-010883" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-010883" does not exist

>>> k8s: api server logs:
error: context "kubenet-010883" does not exist

>>> host: /etc/cni:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> host: ip a s:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> host: ip r s:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> host: iptables-save:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> host: iptables table nat:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-010883" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-010883" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-010883" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> host: kubelet daemon config:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> k8s: kubelet logs:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-010883

>>> host: docker daemon status:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> host: docker daemon config:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> host: docker system info:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> host: cri-docker daemon status:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> host: cri-docker daemon config:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> host: cri-dockerd version:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> host: containerd daemon status:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> host: containerd daemon config:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> host: containerd config dump:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> host: crio daemon status:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> host: crio daemon config:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> host: /etc/crio:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

>>> host: crio config:
* Profile "kubenet-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-010883"

----------------------- debugLogs end: kubenet-010883 [took: 5.671906146s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-010883" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-010883
--- SKIP: TestNetworkPlugins/group/kubenet (5.87s)
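
Editor's note: every probe in the debugLogs dump above failed with "context was not found" or "Profile ... not found" because the kubenet profile was never started; the test skipped first (containerd requires a CNI, which kubenet does not provide), so no kubeconfig context or minikube profile existed when the log collector ran. The cilium dump below fails the same way. A collector could detect the missing context up front; a minimal sketch, assuming client-go's clientcmd package:

-- editor's sketch (Go) --
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

// contextExists loads the default kubeconfig chain and reports whether
// the named context is present in it.
func contextExists(name string) (bool, error) {
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		return false, err
	}
	_, ok := cfg.Contexts[name]
	return ok, nil
}

func main() {
	ok, err := contextExists("kubenet-010883")
	fmt.Printf("context present: %v (err: %v)\n", ok, err)
}
-- /editor's sketch --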

TestNetworkPlugins/group/cilium (5.92s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-010883 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-010883

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-010883

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-010883

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-010883

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-010883

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-010883

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-010883

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-010883

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-010883

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-010883

>>> host: /etc/nsswitch.conf:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> host: /etc/hosts:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> host: /etc/resolv.conf:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-010883

>>> host: crictl pods:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> host: crictl containers:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> k8s: describe netcat deployment:
error: context "cilium-010883" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-010883" does not exist

>>> k8s: netcat logs:
error: context "cilium-010883" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-010883" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-010883" does not exist

>>> k8s: coredns logs:
error: context "cilium-010883" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-010883" does not exist

>>> k8s: api server logs:
error: context "cilium-010883" does not exist

>>> host: /etc/cni:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> host: ip a s:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> host: ip r s:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> host: iptables-save:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> host: iptables table nat:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-010883

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-010883

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-010883" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-010883" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-010883

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-010883

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-010883" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-010883" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-010883" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-010883" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-010883" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> host: kubelet daemon config:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> k8s: kubelet logs:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-010883

>>> host: docker daemon status:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> host: docker daemon config:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> host: docker system info:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> host: cri-docker daemon status:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> host: cri-docker daemon config:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> host: cri-dockerd version:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> host: containerd daemon status:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> host: containerd daemon config:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> host: containerd config dump:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> host: crio daemon status:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> host: crio daemon config:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> host: /etc/crio:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

>>> host: crio config:
* Profile "cilium-010883" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-010883"

----------------------- debugLogs end: cilium-010883 [took: 5.604602143s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-010883" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-010883
--- SKIP: TestNetworkPlugins/group/cilium (5.92s)
