Test Report: Docker_Linux_docker_arm64 17719

e08a2828f2be3e524baaf41342316dad88935561:2023-12-07:32188

Test fail (4/330)

| Order | Failed test                                          | Duration (s) |
|-------|------------------------------------------------------|--------------|
| 35    | TestAddons/parallel/Ingress                          |        39.46 |
| 174   | TestIngressAddonLegacy/serial/ValidateIngressAddons  |        65.65 |
| 272   | TestStoppedBinaryUpgrade/Upgrade                     |       447.15 |
| 273   | TestStoppedBinaryUpgrade/MinikubeLogs                |         0.41 |
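For local triage, a single failed case can usually be re-run by name with Go's test runner against the minikube repository's test/integration package, which is where the addons_test.go and helpers_test.go lines below come from. This is a hedged sketch only: the timeout is an assumption, and the integration harness's own driver/runtime selection flags are omitted here rather than guessed.

	# Sketch: re-run only the failing Ingress subtest (adjust the timeout and add
	# the integration harness's driver/runtime flags for your environment).
	go test -v -run "TestAddons/parallel/Ingress" -timeout 30m ./test/integration
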
TestAddons/parallel/Ingress (39.46s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-946218 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-946218 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-946218 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e5c19369-f547-463f-9bca-72592cfa8081] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e5c19369-f547-463f-9bca-72592cfa8081] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.017678841s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p addons-946218 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-946218 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p addons-946218 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.056261302s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p addons-946218 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p addons-946218 addons disable ingress-dns --alsologtostderr -v=1: (1.314558899s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p addons-946218 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p addons-946218 addons disable ingress --alsologtostderr -v=1: (7.708580512s)
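For context on the failure above: the check resolves the test hostname against the cluster IP, where the ingress-dns addon is expected to answer the query, and the lookup timed out instead. A manual re-check can reuse the same commands already recorded in this log (profile addons-946218, hostname hello-john.test); this is a sketch, not part of the recorded run.

	# Sketch: query the ingress-dns resolver at the minikube IP directly.
	MINIKUBE_IP=$(out/minikube-linux-arm64 -p addons-946218 ip)
	nslookup hello-john.test "$MINIKUBE_IP"
	# A healthy addon answers with a record for hello-john.test; the run above
	# instead returned ";; connection timed out; no servers could be reached".
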
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-946218
helpers_test.go:235: (dbg) docker inspect addons-946218:

-- stdout --
	[
	    {
	        "Id": "4063cc8e22dc4d243949f18baf15d11eebf2eadd93529a9a1ef302a27f6379fb",
	        "Created": "2023-12-07T20:02:14.422683765Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8660,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-07T20:02:14.778990063Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:769b0b780370d646693e9d8a4170c38d193d2f33565406ee9066915c40e406d4",
	        "ResolvConfPath": "/var/lib/docker/containers/4063cc8e22dc4d243949f18baf15d11eebf2eadd93529a9a1ef302a27f6379fb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4063cc8e22dc4d243949f18baf15d11eebf2eadd93529a9a1ef302a27f6379fb/hostname",
	        "HostsPath": "/var/lib/docker/containers/4063cc8e22dc4d243949f18baf15d11eebf2eadd93529a9a1ef302a27f6379fb/hosts",
	        "LogPath": "/var/lib/docker/containers/4063cc8e22dc4d243949f18baf15d11eebf2eadd93529a9a1ef302a27f6379fb/4063cc8e22dc4d243949f18baf15d11eebf2eadd93529a9a1ef302a27f6379fb-json.log",
	        "Name": "/addons-946218",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-946218:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-946218",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d31a8d799f33954b837d7af69167c943d49d340ae3cf3e44f45d2d95d64afaf2-init/diff:/var/lib/docker/overlay2/baac1057f1861dfdebb7423d9d7ad7a05f930e41cec62cfa33740325cb982d86/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d31a8d799f33954b837d7af69167c943d49d340ae3cf3e44f45d2d95d64afaf2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d31a8d799f33954b837d7af69167c943d49d340ae3cf3e44f45d2d95d64afaf2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d31a8d799f33954b837d7af69167c943d49d340ae3cf3e44f45d2d95d64afaf2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-946218",
	                "Source": "/var/lib/docker/volumes/addons-946218/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-946218",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-946218",
	                "name.minikube.sigs.k8s.io": "addons-946218",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3a4eee3660c6fb02b5b5c447050b03765b57083007e8c5c7d189ee2c6410d7f4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3a4eee3660c6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-946218": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "4063cc8e22dc",
	                        "addons-946218"
	                    ],
	                    "NetworkID": "77b4e7bcb5771c3826c1a3f678a8465e2b8793781e244510ab152207e578ee18",
	                    "EndpointID": "7bb83d5527c5428cf81001632da41f8ca7bacda0dd638bed9cc42cd12f448290",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-946218 -n addons-946218
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-946218 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-946218 logs -n 25: (1.174493456s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-482552   | jenkins | v1.32.0 | 07 Dec 23 20:01 UTC |                     |
	|         | -p download-only-482552                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-482552   | jenkins | v1.32.0 | 07 Dec 23 20:01 UTC |                     |
	|         | -p download-only-482552                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-482552   | jenkins | v1.32.0 | 07 Dec 23 20:01 UTC |                     |
	|         | -p download-only-482552                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                                                           |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 07 Dec 23 20:01 UTC | 07 Dec 23 20:01 UTC |
	| delete  | -p download-only-482552                                                                     | download-only-482552   | jenkins | v1.32.0 | 07 Dec 23 20:01 UTC | 07 Dec 23 20:01 UTC |
	| delete  | -p download-only-482552                                                                     | download-only-482552   | jenkins | v1.32.0 | 07 Dec 23 20:01 UTC | 07 Dec 23 20:01 UTC |
	| start   | --download-only -p                                                                          | download-docker-220646 | jenkins | v1.32.0 | 07 Dec 23 20:01 UTC |                     |
	|         | download-docker-220646                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p download-docker-220646                                                                   | download-docker-220646 | jenkins | v1.32.0 | 07 Dec 23 20:01 UTC | 07 Dec 23 20:01 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-439095   | jenkins | v1.32.0 | 07 Dec 23 20:01 UTC |                     |
	|         | binary-mirror-439095                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:35697                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-439095                                                                     | binary-mirror-439095   | jenkins | v1.32.0 | 07 Dec 23 20:01 UTC | 07 Dec 23 20:01 UTC |
	| addons  | enable dashboard -p                                                                         | addons-946218          | jenkins | v1.32.0 | 07 Dec 23 20:01 UTC |                     |
	|         | addons-946218                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-946218          | jenkins | v1.32.0 | 07 Dec 23 20:01 UTC |                     |
	|         | addons-946218                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-946218 --wait=true                                                                | addons-946218          | jenkins | v1.32.0 | 07 Dec 23 20:01 UTC | 07 Dec 23 20:04 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-946218 ip                                                                            | addons-946218          | jenkins | v1.32.0 | 07 Dec 23 20:04 UTC | 07 Dec 23 20:04 UTC |
	| addons  | addons-946218 addons disable                                                                | addons-946218          | jenkins | v1.32.0 | 07 Dec 23 20:04 UTC | 07 Dec 23 20:04 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-946218 addons                                                                        | addons-946218          | jenkins | v1.32.0 | 07 Dec 23 20:04 UTC | 07 Dec 23 20:04 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-946218          | jenkins | v1.32.0 | 07 Dec 23 20:04 UTC | 07 Dec 23 20:04 UTC |
	|         | addons-946218                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-946218 ssh curl -s                                                                   | addons-946218          | jenkins | v1.32.0 | 07 Dec 23 20:04 UTC | 07 Dec 23 20:04 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-946218 ip                                                                            | addons-946218          | jenkins | v1.32.0 | 07 Dec 23 20:05 UTC | 07 Dec 23 20:05 UTC |
	| addons  | addons-946218 addons                                                                        | addons-946218          | jenkins | v1.32.0 | 07 Dec 23 20:05 UTC | 07 Dec 23 20:05 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-946218 addons                                                                        | addons-946218          | jenkins | v1.32.0 | 07 Dec 23 20:05 UTC | 07 Dec 23 20:05 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-946218 addons disable                                                                | addons-946218          | jenkins | v1.32.0 | 07 Dec 23 20:05 UTC | 07 Dec 23 20:05 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-946218 addons disable                                                                | addons-946218          | jenkins | v1.32.0 | 07 Dec 23 20:05 UTC | 07 Dec 23 20:05 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| ssh     | addons-946218 ssh cat                                                                       | addons-946218          | jenkins | v1.32.0 | 07 Dec 23 20:05 UTC | 07 Dec 23 20:05 UTC |
	|         | /opt/local-path-provisioner/pvc-6224022a-bf0c-43f9-b398-1fc2163a085b_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-946218 addons disable                                                                | addons-946218          | jenkins | v1.32.0 | 07 Dec 23 20:05 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 20:01:50
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 20:01:50.911042    8179 out.go:296] Setting OutFile to fd 1 ...
	I1207 20:01:50.911181    8179 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:01:50.911188    8179 out.go:309] Setting ErrFile to fd 2...
	I1207 20:01:50.911194    8179 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:01:50.911443    8179 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-2292/.minikube/bin
	I1207 20:01:50.911888    8179 out.go:303] Setting JSON to false
	I1207 20:01:50.912619    8179 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":2654,"bootTime":1701976657,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1207 20:01:50.912688    8179 start.go:138] virtualization:  
	I1207 20:01:50.915347    8179 out.go:177] * [addons-946218] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1207 20:01:50.917207    8179 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 20:01:50.919178    8179 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 20:01:50.917312    8179 notify.go:220] Checking for updates...
	I1207 20:01:50.923232    8179 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17719-2292/kubeconfig
	I1207 20:01:50.925256    8179 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-2292/.minikube
	I1207 20:01:50.926706    8179 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1207 20:01:50.928573    8179 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 20:01:50.930464    8179 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 20:01:50.954408    8179 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1207 20:01:50.954522    8179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 20:01:51.037651    8179 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-07 20:01:51.027163355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1207 20:01:51.037758    8179 docker.go:295] overlay module found
	I1207 20:01:51.039791    8179 out.go:177] * Using the docker driver based on user configuration
	I1207 20:01:51.041311    8179 start.go:298] selected driver: docker
	I1207 20:01:51.041331    8179 start.go:902] validating driver "docker" against <nil>
	I1207 20:01:51.041345    8179 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 20:01:51.042062    8179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 20:01:51.117603    8179 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-07 20:01:51.107644708 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1207 20:01:51.117774    8179 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 20:01:51.118020    8179 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 20:01:51.119812    8179 out.go:177] * Using Docker driver with root privileges
	I1207 20:01:51.121644    8179 cni.go:84] Creating CNI manager for ""
	I1207 20:01:51.121678    8179 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 20:01:51.121690    8179 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1207 20:01:51.121706    8179 start_flags.go:323] config:
	{Name:addons-946218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-946218 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:01:51.124104    8179 out.go:177] * Starting control plane node addons-946218 in cluster addons-946218
	I1207 20:01:51.125801    8179 cache.go:121] Beginning downloading kic base image for docker with docker
	I1207 20:01:51.127340    8179 out.go:177] * Pulling base image ...
	I1207 20:01:51.128951    8179 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 20:01:51.129006    8179 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17719-2292/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1207 20:01:51.129030    8179 cache.go:56] Caching tarball of preloaded images
	I1207 20:01:51.129054    8179 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c in local docker daemon
	I1207 20:01:51.129112    8179 preload.go:174] Found /home/jenkins/minikube-integration/17719-2292/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1207 20:01:51.129123    8179 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1207 20:01:51.129476    8179 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/config.json ...
	I1207 20:01:51.129505    8179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/config.json: {Name:mk65069aa441c08498f81925dabee829bb67bbf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:01:51.149899    8179 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c to local cache
	I1207 20:01:51.150027    8179 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c in local cache directory
	I1207 20:01:51.150048    8179 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c in local cache directory, skipping pull
	I1207 20:01:51.150054    8179 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c exists in cache, skipping pull
	I1207 20:01:51.150062    8179 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c as a tarball
	I1207 20:01:51.150068    8179 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c from local cache
	I1207 20:02:07.120398    8179 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c from cached tarball
	I1207 20:02:07.120440    8179 cache.go:194] Successfully downloaded all kic artifacts
	I1207 20:02:07.120492    8179 start.go:365] acquiring machines lock for addons-946218: {Name:mk3e8ea9a98cede806ec856689b0f5b5eaa03b4b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 20:02:07.120623    8179 start.go:369] acquired machines lock for "addons-946218" in 109.168µs
	I1207 20:02:07.120657    8179 start.go:93] Provisioning new machine with config: &{Name:addons-946218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-946218 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 20:02:07.120756    8179 start.go:125] createHost starting for "" (driver="docker")
	I1207 20:02:07.122937    8179 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1207 20:02:07.123185    8179 start.go:159] libmachine.API.Create for "addons-946218" (driver="docker")
	I1207 20:02:07.123216    8179 client.go:168] LocalClient.Create starting
	I1207 20:02:07.123359    8179 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17719-2292/.minikube/certs/ca.pem
	I1207 20:02:07.498270    8179 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17719-2292/.minikube/certs/cert.pem
	I1207 20:02:08.130332    8179 cli_runner.go:164] Run: docker network inspect addons-946218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1207 20:02:08.147431    8179 cli_runner.go:211] docker network inspect addons-946218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1207 20:02:08.147512    8179 network_create.go:281] running [docker network inspect addons-946218] to gather additional debugging logs...
	I1207 20:02:08.147533    8179 cli_runner.go:164] Run: docker network inspect addons-946218
	W1207 20:02:08.165776    8179 cli_runner.go:211] docker network inspect addons-946218 returned with exit code 1
	I1207 20:02:08.165801    8179 network_create.go:284] error running [docker network inspect addons-946218]: docker network inspect addons-946218: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-946218 not found
	I1207 20:02:08.165814    8179 network_create.go:286] output of [docker network inspect addons-946218]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-946218 not found
	
	** /stderr **
	I1207 20:02:08.165906    8179 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 20:02:08.184977    8179 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40024feea0}
	I1207 20:02:08.185022    8179 network_create.go:124] attempt to create docker network addons-946218 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1207 20:02:08.185085    8179 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-946218 addons-946218
	I1207 20:02:08.255850    8179 network_create.go:108] docker network addons-946218 192.168.49.0/24 created
	I1207 20:02:08.255891    8179 kic.go:121] calculated static IP "192.168.49.2" for the "addons-946218" container
	I1207 20:02:08.255968    8179 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1207 20:02:08.272787    8179 cli_runner.go:164] Run: docker volume create addons-946218 --label name.minikube.sigs.k8s.io=addons-946218 --label created_by.minikube.sigs.k8s.io=true
	I1207 20:02:08.291574    8179 oci.go:103] Successfully created a docker volume addons-946218
	I1207 20:02:08.291663    8179 cli_runner.go:164] Run: docker run --rm --name addons-946218-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-946218 --entrypoint /usr/bin/test -v addons-946218:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c -d /var/lib
	I1207 20:02:10.319625    8179 cli_runner.go:217] Completed: docker run --rm --name addons-946218-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-946218 --entrypoint /usr/bin/test -v addons-946218:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c -d /var/lib: (2.027922891s)
	I1207 20:02:10.319654    8179 oci.go:107] Successfully prepared a docker volume addons-946218
	I1207 20:02:10.319692    8179 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 20:02:10.319716    8179 kic.go:194] Starting extracting preloaded images to volume ...
	I1207 20:02:10.319798    8179 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17719-2292/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-946218:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c -I lz4 -xf /preloaded.tar -C /extractDir
	I1207 20:02:14.337286    8179 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17719-2292/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-946218:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c -I lz4 -xf /preloaded.tar -C /extractDir: (4.017446428s)
	I1207 20:02:14.337316    8179 kic.go:203] duration metric: took 4.017598 seconds to extract preloaded images to volume
	W1207 20:02:14.337517    8179 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1207 20:02:14.337632    8179 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1207 20:02:14.406577    8179 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-946218 --name addons-946218 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-946218 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-946218 --network addons-946218 --ip 192.168.49.2 --volume addons-946218:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c
	I1207 20:02:14.788026    8179 cli_runner.go:164] Run: docker container inspect addons-946218 --format={{.State.Running}}
	I1207 20:02:14.809075    8179 cli_runner.go:164] Run: docker container inspect addons-946218 --format={{.State.Status}}
	I1207 20:02:14.833186    8179 cli_runner.go:164] Run: docker exec addons-946218 stat /var/lib/dpkg/alternatives/iptables
	I1207 20:02:14.901028    8179 oci.go:144] the created container "addons-946218" has a running status.
	I1207 20:02:14.901053    8179 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17719-2292/.minikube/machines/addons-946218/id_rsa...
	I1207 20:02:15.616907    8179 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17719-2292/.minikube/machines/addons-946218/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1207 20:02:15.646402    8179 cli_runner.go:164] Run: docker container inspect addons-946218 --format={{.State.Status}}
	I1207 20:02:15.671961    8179 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1207 20:02:15.671980    8179 kic_runner.go:114] Args: [docker exec --privileged addons-946218 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1207 20:02:15.749039    8179 cli_runner.go:164] Run: docker container inspect addons-946218 --format={{.State.Status}}
	I1207 20:02:15.780976    8179 machine.go:88] provisioning docker machine ...
	I1207 20:02:15.781004    8179 ubuntu.go:169] provisioning hostname "addons-946218"
	I1207 20:02:15.781068    8179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-946218
	I1207 20:02:15.810069    8179 main.go:141] libmachine: Using SSH client type: native
	I1207 20:02:15.810485    8179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1207 20:02:15.810497    8179 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-946218 && echo "addons-946218" | sudo tee /etc/hostname
	I1207 20:02:15.986914    8179 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-946218
	
	I1207 20:02:15.986986    8179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-946218
	I1207 20:02:16.014428    8179 main.go:141] libmachine: Using SSH client type: native
	I1207 20:02:16.014834    8179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1207 20:02:16.014853    8179 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-946218' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-946218/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-946218' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 20:02:16.149914    8179 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 20:02:16.149948    8179 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17719-2292/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-2292/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-2292/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-2292/.minikube}
	I1207 20:02:16.149974    8179 ubuntu.go:177] setting up certificates
	I1207 20:02:16.149981    8179 provision.go:83] configureAuth start
	I1207 20:02:16.150038    8179 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-946218
	I1207 20:02:16.169311    8179 provision.go:138] copyHostCerts
	I1207 20:02:16.169392    8179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-2292/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-2292/.minikube/ca.pem (1078 bytes)
	I1207 20:02:16.169523    8179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-2292/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-2292/.minikube/cert.pem (1123 bytes)
	I1207 20:02:16.169596    8179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-2292/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-2292/.minikube/key.pem (1679 bytes)
	I1207 20:02:16.169657    8179 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-2292/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-2292/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-2292/.minikube/certs/ca-key.pem org=jenkins.addons-946218 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-946218]
	I1207 20:02:16.907931    8179 provision.go:172] copyRemoteCerts
	I1207 20:02:16.908023    8179 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 20:02:16.908065    8179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-946218
	I1207 20:02:16.925598    8179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/addons-946218/id_rsa Username:docker}
	I1207 20:02:17.019243    8179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1207 20:02:17.047500    8179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1207 20:02:17.074397    8179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 20:02:17.101962    8179 provision.go:86] duration metric: configureAuth took 951.96757ms
	I1207 20:02:17.101987    8179 ubuntu.go:193] setting minikube options for container-runtime
	I1207 20:02:17.102171    8179 config.go:182] Loaded profile config "addons-946218": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 20:02:17.102227    8179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-946218
	I1207 20:02:17.120182    8179 main.go:141] libmachine: Using SSH client type: native
	I1207 20:02:17.120578    8179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1207 20:02:17.120594    8179 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1207 20:02:17.246397    8179 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1207 20:02:17.246419    8179 ubuntu.go:71] root file system type: overlay
	I1207 20:02:17.246524    8179 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1207 20:02:17.246613    8179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-946218
	I1207 20:02:17.265802    8179 main.go:141] libmachine: Using SSH client type: native
	I1207 20:02:17.266213    8179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1207 20:02:17.266301    8179 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1207 20:02:17.407760    8179 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1207 20:02:17.407848    8179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-946218
	I1207 20:02:17.427552    8179 main.go:141] libmachine: Using SSH client type: native
	I1207 20:02:17.427975    8179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1207 20:02:17.427994    8179 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1207 20:02:18.263783    8179 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:20.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-12-07 20:02:17.404474703 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1207 20:02:18.263816    8179 machine.go:91] provisioned docker machine in 2.482822833s
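	(Illustrative only, not executed by the test: the docker.service override written above could be verified inside the node, assuming the addons-946218 container is still running.)
	    # show the unit systemd actually loaded and confirm dockerd restarted cleanly
	    docker exec addons-946218 systemctl cat docker.service
	    docker exec addons-946218 systemctl is-active docker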
	I1207 20:02:18.263828    8179 client.go:171] LocalClient.Create took 11.140602132s
	I1207 20:02:18.263840    8179 start.go:167] duration metric: libmachine.API.Create for "addons-946218" took 11.140655269s
	I1207 20:02:18.263847    8179 start.go:300] post-start starting for "addons-946218" (driver="docker")
	I1207 20:02:18.263856    8179 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 20:02:18.263921    8179 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 20:02:18.263968    8179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-946218
	I1207 20:02:18.282561    8179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/addons-946218/id_rsa Username:docker}
	I1207 20:02:18.375747    8179 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 20:02:18.379856    8179 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 20:02:18.379893    8179 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1207 20:02:18.379905    8179 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1207 20:02:18.379912    8179 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1207 20:02:18.379924    8179 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-2292/.minikube/addons for local assets ...
	I1207 20:02:18.380001    8179 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-2292/.minikube/files for local assets ...
	I1207 20:02:18.380030    8179 start.go:303] post-start completed in 116.177631ms
	I1207 20:02:18.380328    8179 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-946218
	I1207 20:02:18.398380    8179 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/config.json ...
	I1207 20:02:18.398658    8179 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 20:02:18.398707    8179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-946218
	I1207 20:02:18.416869    8179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/addons-946218/id_rsa Username:docker}
	I1207 20:02:18.506730    8179 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 20:02:18.512471    8179 start.go:128] duration metric: createHost completed in 11.391696166s
	I1207 20:02:18.512496    8179 start.go:83] releasing machines lock for "addons-946218", held for 11.391858224s
	I1207 20:02:18.512563    8179 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-946218
	I1207 20:02:18.534650    8179 ssh_runner.go:195] Run: cat /version.json
	I1207 20:02:18.534708    8179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-946218
	I1207 20:02:18.534953    8179 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 20:02:18.535014    8179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-946218
	I1207 20:02:18.562640    8179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/addons-946218/id_rsa Username:docker}
	I1207 20:02:18.574238    8179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/addons-946218/id_rsa Username:docker}
	I1207 20:02:18.779594    8179 ssh_runner.go:195] Run: systemctl --version
	I1207 20:02:18.785255    8179 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1207 20:02:18.790921    8179 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1207 20:02:18.822176    8179 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1207 20:02:18.822259    8179 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 20:02:18.857621    8179 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1207 20:02:18.857688    8179 start.go:475] detecting cgroup driver to use...
	I1207 20:02:18.857727    8179 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1207 20:02:18.857844    8179 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 20:02:18.877629    8179 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1207 20:02:18.889551    8179 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1207 20:02:18.901430    8179 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1207 20:02:18.901538    8179 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1207 20:02:18.913289    8179 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1207 20:02:18.924952    8179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1207 20:02:18.936795    8179 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1207 20:02:18.948979    8179 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 20:02:18.960222    8179 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1207 20:02:18.972286    8179 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 20:02:18.982904    8179 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 20:02:18.993210    8179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 20:02:19.090376    8179 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1207 20:02:19.209266    8179 start.go:475] detecting cgroup driver to use...
	I1207 20:02:19.209320    8179 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1207 20:02:19.209391    8179 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1207 20:02:19.227314    8179 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1207 20:02:19.227392    8179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1207 20:02:19.243046    8179 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 20:02:19.264725    8179 ssh_runner.go:195] Run: which cri-dockerd
	I1207 20:02:19.269966    8179 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1207 20:02:19.281382    8179 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1207 20:02:19.305959    8179 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1207 20:02:19.420400    8179 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1207 20:02:19.532828    8179 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1207 20:02:19.532958    8179 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1207 20:02:19.555845    8179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 20:02:19.659381    8179 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1207 20:02:19.945509    8179 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1207 20:02:20.047016    8179 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1207 20:02:20.152061    8179 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1207 20:02:20.265329    8179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 20:02:20.369873    8179 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1207 20:02:20.386401    8179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 20:02:20.479121    8179 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1207 20:02:20.565710    8179 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1207 20:02:20.565869    8179 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1207 20:02:20.571517    8179 start.go:543] Will wait 60s for crictl version
	I1207 20:02:20.571624    8179 ssh_runner.go:195] Run: which crictl
	I1207 20:02:20.576310    8179 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 20:02:20.631331    8179 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1207 20:02:20.631469    8179 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1207 20:02:20.658222    8179 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1207 20:02:20.693007    8179 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1207 20:02:20.693137    8179 cli_runner.go:164] Run: docker network inspect addons-946218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 20:02:20.711671    8179 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1207 20:02:20.717395    8179 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 20:02:20.730459    8179 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 20:02:20.730534    8179 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1207 20:02:20.750806    8179 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1207 20:02:20.750828    8179 docker.go:601] Images already preloaded, skipping extraction
	I1207 20:02:20.750893    8179 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1207 20:02:20.770859    8179 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1207 20:02:20.770879    8179 cache_images.go:84] Images are preloaded, skipping loading
	I1207 20:02:20.770936    8179 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1207 20:02:20.837307    8179 cni.go:84] Creating CNI manager for ""
	I1207 20:02:20.837332    8179 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 20:02:20.837363    8179 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 20:02:20.837382    8179 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-946218 NodeName:addons-946218 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 20:02:20.837520    8179 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-946218"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 20:02:20.837580    8179 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-946218 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-946218 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1207 20:02:20.837645    8179 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1207 20:02:20.848265    8179 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 20:02:20.848335    8179 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 20:02:20.858677    8179 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1207 20:02:20.879501    8179 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 20:02:20.900528    8179 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
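	(Illustrative sketch, not a step the test performs: the kubeadm config staged above could be exercised without changing node state via kubeadm's dry-run mode, using the binary and file paths shown in the surrounding log lines.)
	    # render what kubeadm would do for this config without applying any changes
	    docker exec addons-946218 /var/lib/minikube/binaries/v1.28.4/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run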
	I1207 20:02:20.923940    8179 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1207 20:02:20.929099    8179 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 20:02:20.942761    8179 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218 for IP: 192.168.49.2
	I1207 20:02:20.942791    8179 certs.go:190] acquiring lock for shared ca certs: {Name:mkf0aeb9e21068cbc2b0de52461bf1fef9a8e437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:02:20.942964    8179 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17719-2292/.minikube/ca.key
	I1207 20:02:21.380265    8179 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17719-2292/.minikube/ca.crt ...
	I1207 20:02:21.380296    8179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-2292/.minikube/ca.crt: {Name:mk7ceea1db1bd78d8ff3cd83388dfb039d50578e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:02:21.380526    8179 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17719-2292/.minikube/ca.key ...
	I1207 20:02:21.380540    8179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-2292/.minikube/ca.key: {Name:mkc3769f3f69c6769c51828d730055861365cdd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:02:21.380667    8179 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17719-2292/.minikube/proxy-client-ca.key
	I1207 20:02:22.095716    8179 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17719-2292/.minikube/proxy-client-ca.crt ...
	I1207 20:02:22.095747    8179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-2292/.minikube/proxy-client-ca.crt: {Name:mk696882be0b9995b623d2694c88b096a79390bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:02:22.095930    8179 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17719-2292/.minikube/proxy-client-ca.key ...
	I1207 20:02:22.095942    8179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-2292/.minikube/proxy-client-ca.key: {Name:mk48e3e2b08aef7f4d025a730522ceaed8b9f0fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:02:22.096063    8179 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.key
	I1207 20:02:22.096082    8179 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt with IP's: []
	I1207 20:02:22.997954    8179 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt ...
	I1207 20:02:22.997985    8179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: {Name:mk9ccbb531a4a93d5f2c07b671553d9c617c73f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:02:22.998170    8179 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.key ...
	I1207 20:02:22.998183    8179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.key: {Name:mk4681cc26d44551b42b7fc6818bbc6210d48707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:02:22.998261    8179 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/apiserver.key.dd3b5fb2
	I1207 20:02:22.998282    8179 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1207 20:02:23.233452    8179 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/apiserver.crt.dd3b5fb2 ...
	I1207 20:02:23.233481    8179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/apiserver.crt.dd3b5fb2: {Name:mkcd59d7086ebabbdb32ca5bbf6fc57296fe3fb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:02:23.233656    8179 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/apiserver.key.dd3b5fb2 ...
	I1207 20:02:23.233670    8179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/apiserver.key.dd3b5fb2: {Name:mke22dec4fec3cb0156b8f784fed6b2c7207d372 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:02:23.233751    8179 certs.go:337] copying /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/apiserver.crt
	I1207 20:02:23.233828    8179 certs.go:341] copying /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/apiserver.key
	I1207 20:02:23.233877    8179 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/proxy-client.key
	I1207 20:02:23.233895    8179 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/proxy-client.crt with IP's: []
	I1207 20:02:23.834707    8179 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/proxy-client.crt ...
	I1207 20:02:23.834737    8179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/proxy-client.crt: {Name:mk879aecf0537d0eb357029d6bec319cf746a098 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:02:23.834914    8179 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/proxy-client.key ...
	I1207 20:02:23.834925    8179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/proxy-client.key: {Name:mk04ea429673542cd5a29ed3d278e8c240fadb62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:02:23.835119    8179 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-2292/.minikube/certs/home/jenkins/minikube-integration/17719-2292/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 20:02:23.835159    8179 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-2292/.minikube/certs/home/jenkins/minikube-integration/17719-2292/.minikube/certs/ca.pem (1078 bytes)
	I1207 20:02:23.835189    8179 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-2292/.minikube/certs/home/jenkins/minikube-integration/17719-2292/.minikube/certs/cert.pem (1123 bytes)
	I1207 20:02:23.835224    8179 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-2292/.minikube/certs/home/jenkins/minikube-integration/17719-2292/.minikube/certs/key.pem (1679 bytes)
	I1207 20:02:23.835862    8179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 20:02:23.865274    8179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1207 20:02:23.892902    8179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 20:02:23.920516    8179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 20:02:23.948664    8179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 20:02:23.975980    8179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 20:02:24.005605    8179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 20:02:24.036946    8179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 20:02:24.067904    8179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 20:02:24.098948    8179 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
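	(Illustrative only: the SANs baked into the apiserver certificate copied above could be double-checked with openssl inside the node; the certificate path comes from the scp line above.)
	    # print the Subject Alternative Names of the generated apiserver certificate
	    docker exec addons-946218 openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'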
	I1207 20:02:24.121826    8179 ssh_runner.go:195] Run: openssl version
	I1207 20:02:24.129227    8179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 20:02:24.142052    8179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:02:24.147107    8179 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:02 /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:02:24.147224    8179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:02:24.157744    8179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 20:02:24.169925    8179 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 20:02:24.174611    8179 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1207 20:02:24.174714    8179 kubeadm.go:404] StartCluster: {Name:addons-946218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-946218 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:02:24.174868    8179 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1207 20:02:24.196638    8179 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 20:02:24.207710    8179 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 20:02:24.218532    8179 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1207 20:02:24.218622    8179 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 20:02:24.229151    8179 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 20:02:24.229199    8179 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1207 20:02:24.279389    8179 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1207 20:02:24.279618    8179 kubeadm.go:322] [preflight] Running pre-flight checks
	I1207 20:02:24.337918    8179 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1207 20:02:24.337987    8179 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1050-aws
	I1207 20:02:24.338025    8179 kubeadm.go:322] OS: Linux
	I1207 20:02:24.338072    8179 kubeadm.go:322] CGROUPS_CPU: enabled
	I1207 20:02:24.338122    8179 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1207 20:02:24.338170    8179 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1207 20:02:24.338219    8179 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1207 20:02:24.338271    8179 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1207 20:02:24.338328    8179 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1207 20:02:24.338375    8179 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1207 20:02:24.338424    8179 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1207 20:02:24.338472    8179 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1207 20:02:24.415348    8179 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 20:02:24.415455    8179 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 20:02:24.415546    8179 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1207 20:02:24.756552    8179 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 20:02:24.759274    8179 out.go:204]   - Generating certificates and keys ...
	I1207 20:02:24.759398    8179 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1207 20:02:24.759466    8179 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1207 20:02:26.344397    8179 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1207 20:02:26.737039    8179 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1207 20:02:26.989631    8179 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1207 20:02:27.640224    8179 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1207 20:02:28.090802    8179 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1207 20:02:28.090951    8179 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-946218 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1207 20:02:28.853386    8179 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1207 20:02:28.853772    8179 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-946218 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1207 20:02:29.103437    8179 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1207 20:02:29.692360    8179 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1207 20:02:30.391858    8179 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1207 20:02:30.392165    8179 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 20:02:30.662054    8179 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 20:02:31.349247    8179 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 20:02:31.810084    8179 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 20:02:31.977168    8179 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 20:02:31.977943    8179 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 20:02:31.980762    8179 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 20:02:31.983512    8179 out.go:204]   - Booting up control plane ...
	I1207 20:02:31.983642    8179 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 20:02:31.983715    8179 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 20:02:31.983785    8179 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 20:02:32.000369    8179 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 20:02:32.005226    8179 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 20:02:32.005280    8179 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1207 20:02:32.121913    8179 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1207 20:02:41.124212    8179 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.002319 seconds
	I1207 20:02:41.124505    8179 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 20:02:41.139604    8179 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 20:02:41.663151    8179 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 20:02:41.663494    8179 kubeadm.go:322] [mark-control-plane] Marking the node addons-946218 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 20:02:42.180999    8179 kubeadm.go:322] [bootstrap-token] Using token: czr00f.6ft80texim7ay2mo
	I1207 20:02:42.184115    8179 out.go:204]   - Configuring RBAC rules ...
	I1207 20:02:42.184253    8179 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 20:02:42.191094    8179 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 20:02:42.201910    8179 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 20:02:42.208223    8179 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 20:02:42.213081    8179 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 20:02:42.227199    8179 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 20:02:42.244850    8179 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 20:02:42.498721    8179 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1207 20:02:42.599656    8179 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1207 20:02:42.601296    8179 kubeadm.go:322] 
	I1207 20:02:42.601364    8179 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1207 20:02:42.601370    8179 kubeadm.go:322] 
	I1207 20:02:42.601441    8179 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1207 20:02:42.601446    8179 kubeadm.go:322] 
	I1207 20:02:42.601470    8179 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1207 20:02:42.602196    8179 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 20:02:42.602250    8179 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 20:02:42.602255    8179 kubeadm.go:322] 
	I1207 20:02:42.602314    8179 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1207 20:02:42.602320    8179 kubeadm.go:322] 
	I1207 20:02:42.602364    8179 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 20:02:42.602369    8179 kubeadm.go:322] 
	I1207 20:02:42.602417    8179 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1207 20:02:42.602500    8179 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 20:02:42.602564    8179 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 20:02:42.602569    8179 kubeadm.go:322] 
	I1207 20:02:42.602958    8179 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 20:02:42.603036    8179 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1207 20:02:42.603041    8179 kubeadm.go:322] 
	I1207 20:02:42.603385    8179 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token czr00f.6ft80texim7ay2mo \
	I1207 20:02:42.603486    8179 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:bf03bebb018fea717c072634f3af28c80686bb1a7a8d0c481a3a9bb717d143b1 \
	I1207 20:02:42.603757    8179 kubeadm.go:322] 	--control-plane 
	I1207 20:02:42.603769    8179 kubeadm.go:322] 
	I1207 20:02:42.604127    8179 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1207 20:02:42.604137    8179 kubeadm.go:322] 
	I1207 20:02:42.605112    8179 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token czr00f.6ft80texim7ay2mo \
	I1207 20:02:42.605499    8179 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:bf03bebb018fea717c072634f3af28c80686bb1a7a8d0c481a3a9bb717d143b1 
	I1207 20:02:42.608806    8179 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1050-aws\n", err: exit status 1
	I1207 20:02:42.608917    8179 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
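The kubeadm output above ends with one-time join commands built around the bootstrap token czr00f.6ft80texim7ay2mo and the CA certificate hash. Bootstrap tokens expire (24 hours by default), so on a longer-lived cluster a fresh worker-join command is normally regenerated on the control plane; the standard kubeadm commands for that (generic kubeadm usage, not something this test run executes) are:

    # print a fresh join command with a newly minted token
    kubeadm token create --print-join-command

    # recompute the --discovery-token-ca-cert-hash by hand if needed
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'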
	I1207 20:02:42.608936    8179 cni.go:84] Creating CNI manager for ""
	I1207 20:02:42.608952    8179 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 20:02:42.611019    8179 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 20:02:42.612655    8179 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 20:02:42.643919    8179 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
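The conflist scp'd above (457 bytes, written from memory) is the bridge CNI configuration the kubelet will load. The log does not show its contents; a sketch of the bridge + portmap plugin chain minikube typically writes, with assumed field values, would look like this:

    # illustrative sketch only -- the exact bytes minikube generates may differ
    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF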
	I1207 20:02:42.689707    8179 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 20:02:42.689829    8179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:02:42.689914    8179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c minikube.k8s.io/name=addons-946218 minikube.k8s.io/updated_at=2023_12_07T20_02_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:02:43.019442    8179 ops.go:34] apiserver oom_adj: -16
	I1207 20:02:43.019536    8179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:02:43.116861    8179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:02:43.711218    8179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:02:44.211311    8179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:02:44.710674    8179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:02:45.211529    8179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:02:45.711382    8179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:02:46.210649    8179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:02:46.711192    8179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:02:47.210710    8179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:02:47.711296    8179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:02:48.210661    8179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:02:48.711398    8179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:02:49.210703    8179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:02:49.711088    8179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:02:50.211641    8179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:02:50.711167    8179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:02:51.210848    8179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:02:51.711423    8179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:02:52.211572    8179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:02:52.711162    8179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:02:53.211197    8179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:02:53.710638    8179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:02:54.211550    8179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:02:54.710963    8179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:02:55.210623    8179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:02:55.711208    8179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:02:55.819316    8179 kubeadm.go:1088] duration metric: took 13.129530032s to wait for elevateKubeSystemPrivileges.
	I1207 20:02:55.819342    8179 kubeadm.go:406] StartCluster complete in 31.64463978s
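The burst of repeated `kubectl get sa default` runs above is a poll: minikube waits for the default service account to exist before binding privileges in kube-system (the 13.13s elevateKubeSystemPrivileges metric). kubectl has no wait-for-existence condition, so the shell equivalent of that loop is simply:

    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # minikube retries on a ~500ms cadence, per the timestamps above
    done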
	I1207 20:02:55.819357    8179 settings.go:142] acquiring lock: {Name:mk4e1ad85078db32f53ce2cb878f95b1dc79d720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:02:55.819467    8179 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17719-2292/kubeconfig
	I1207 20:02:55.820154    8179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-2292/kubeconfig: {Name:mkb58bbc3586feb84db8c4c89653a5136ccfc407 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:02:55.820449    8179 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 20:02:55.820847    8179 config.go:182] Loaded profile config "addons-946218": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 20:02:55.821030    8179 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
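The toEnable map above is the resolved addon configuration for this profile. The same switches are driven from the minikube CLI; typical invocations (generic usage, not commands taken from this log) look like:

    minikube -p addons-946218 addons list
    minikube -p addons-946218 addons enable metrics-server
    minikube -p addons-946218 addons disable inspektor-gadget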
	I1207 20:02:55.821148    8179 addons.go:69] Setting volumesnapshots=true in profile "addons-946218"
	I1207 20:02:55.821164    8179 addons.go:231] Setting addon volumesnapshots=true in "addons-946218"
	I1207 20:02:55.821223    8179 host.go:66] Checking if "addons-946218" exists ...
	I1207 20:02:55.821777    8179 cli_runner.go:164] Run: docker container inspect addons-946218 --format={{.State.Status}}
	I1207 20:02:55.822939    8179 addons.go:69] Setting inspektor-gadget=true in profile "addons-946218"
	I1207 20:02:55.822962    8179 addons.go:231] Setting addon inspektor-gadget=true in "addons-946218"
	I1207 20:02:55.822996    8179 host.go:66] Checking if "addons-946218" exists ...
	I1207 20:02:55.823491    8179 cli_runner.go:164] Run: docker container inspect addons-946218 --format={{.State.Status}}
	I1207 20:02:55.825594    8179 addons.go:69] Setting cloud-spanner=true in profile "addons-946218"
	I1207 20:02:55.825623    8179 addons.go:231] Setting addon cloud-spanner=true in "addons-946218"
	I1207 20:02:55.825668    8179 host.go:66] Checking if "addons-946218" exists ...
	I1207 20:02:55.826163    8179 cli_runner.go:164] Run: docker container inspect addons-946218 --format={{.State.Status}}
	I1207 20:02:55.826408    8179 addons.go:69] Setting metrics-server=true in profile "addons-946218"
	I1207 20:02:55.826426    8179 addons.go:231] Setting addon metrics-server=true in "addons-946218"
	I1207 20:02:55.826467    8179 host.go:66] Checking if "addons-946218" exists ...
	I1207 20:02:55.826943    8179 cli_runner.go:164] Run: docker container inspect addons-946218 --format={{.State.Status}}
	I1207 20:02:55.837649    8179 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-946218"
	I1207 20:02:55.837704    8179 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-946218"
	I1207 20:02:55.837753    8179 host.go:66] Checking if "addons-946218" exists ...
	I1207 20:02:55.838276    8179 cli_runner.go:164] Run: docker container inspect addons-946218 --format={{.State.Status}}
	I1207 20:02:55.843915    8179 addons.go:69] Setting default-storageclass=true in profile "addons-946218"
	I1207 20:02:55.843943    8179 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-946218"
	I1207 20:02:55.844310    8179 cli_runner.go:164] Run: docker container inspect addons-946218 --format={{.State.Status}}
	I1207 20:02:55.853216    8179 addons.go:69] Setting gcp-auth=true in profile "addons-946218"
	I1207 20:02:55.853246    8179 mustload.go:65] Loading cluster: addons-946218
	I1207 20:02:55.853485    8179 config.go:182] Loaded profile config "addons-946218": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 20:02:55.856687    8179 addons.go:69] Setting ingress=true in profile "addons-946218"
	I1207 20:02:55.858993    8179 addons.go:231] Setting addon ingress=true in "addons-946218"
	I1207 20:02:55.859059    8179 host.go:66] Checking if "addons-946218" exists ...
	I1207 20:02:55.859488    8179 cli_runner.go:164] Run: docker container inspect addons-946218 --format={{.State.Status}}
	I1207 20:02:55.856762    8179 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-946218"
	I1207 20:02:55.874738    8179 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-946218"
	I1207 20:02:55.874796    8179 host.go:66] Checking if "addons-946218" exists ...
	I1207 20:02:55.875235    8179 cli_runner.go:164] Run: docker container inspect addons-946218 --format={{.State.Status}}
	I1207 20:02:55.884026    8179 cli_runner.go:164] Run: docker container inspect addons-946218 --format={{.State.Status}}
	I1207 20:02:55.856769    8179 addons.go:69] Setting registry=true in profile "addons-946218"
	I1207 20:02:55.897150    8179 addons.go:231] Setting addon registry=true in "addons-946218"
	I1207 20:02:55.897208    8179 host.go:66] Checking if "addons-946218" exists ...
	I1207 20:02:55.897661    8179 cli_runner.go:164] Run: docker container inspect addons-946218 --format={{.State.Status}}
	I1207 20:02:55.856780    8179 addons.go:69] Setting storage-provisioner=true in profile "addons-946218"
	I1207 20:02:55.915585    8179 addons.go:231] Setting addon storage-provisioner=true in "addons-946218"
	I1207 20:02:55.915643    8179 host.go:66] Checking if "addons-946218" exists ...
	I1207 20:02:55.916098    8179 cli_runner.go:164] Run: docker container inspect addons-946218 --format={{.State.Status}}
	I1207 20:02:55.856787    8179 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-946218"
	I1207 20:02:55.949637    8179 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-946218"
	I1207 20:02:55.949996    8179 cli_runner.go:164] Run: docker container inspect addons-946218 --format={{.State.Status}}
	I1207 20:02:55.972784    8179 addons.go:69] Setting ingress-dns=true in profile "addons-946218"
	I1207 20:02:55.972817    8179 addons.go:231] Setting addon ingress-dns=true in "addons-946218"
	I1207 20:02:55.972878    8179 host.go:66] Checking if "addons-946218" exists ...
	I1207 20:02:55.973447    8179 cli_runner.go:164] Run: docker container inspect addons-946218 --format={{.State.Status}}
	I1207 20:02:56.073867    8179 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1207 20:02:56.076347    8179 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1207 20:02:56.076366    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1207 20:02:56.113485    8179 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1207 20:02:56.115810    8179 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1207 20:02:56.120728    8179 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1207 20:02:56.120749    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1207 20:02:56.120813    8179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-946218
	I1207 20:02:56.129369    8179 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1207 20:02:56.131275    8179 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1207 20:02:56.133107    8179 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1207 20:02:56.115906    8179 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1207 20:02:56.073823    8179 addons.go:231] Setting addon default-storageclass=true in "addons-946218"
	I1207 20:02:56.077256    8179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-946218
	I1207 20:02:56.139250    8179 host.go:66] Checking if "addons-946218" exists ...
	I1207 20:02:56.148814    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1207 20:02:56.160829    8179 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1207 20:02:56.160871    8179 host.go:66] Checking if "addons-946218" exists ...
	I1207 20:02:56.164850    8179 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1207 20:02:56.170337    8179 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-946218" context rescaled to 1 replicas
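The rescale noted just above trims the coredns deployment to one replica, which is all a single-node cluster needs; done by hand it would be the standard scale call:

    kubectl --context addons-946218 -n kube-system scale deployment coredns --replicas=1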
	I1207 20:02:56.172796    8179 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1207 20:02:56.172867    8179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-946218
	I1207 20:02:56.179048    8179 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1207 20:02:56.179071    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1207 20:02:56.179129    8179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-946218
	I1207 20:02:56.192224    8179 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
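The pipeline above rewrites the coredns ConfigMap in flight: sed inserts a hosts stanza before the `forward . /etc/resolv.conf` line and a `log` directive before `errors`, then pushes the result back with `kubectl replace -f -`. Once the replace lands (confirmed at 20:02:59 below as "host record injected"), the Corefile carries a block equivalent to:

    kubectl -n kube-system get configmap coredns -o yaml
    # the Corefile should now contain:
    #        hosts {
    #           192.168.49.1 host.minikube.internal
    #           fallthrough
    #        }

which lets pods resolve host.minikube.internal to the Docker network gateway.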
	I1207 20:02:56.176898    8179 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1207 20:02:56.177372    8179 cli_runner.go:164] Run: docker container inspect addons-946218 --format={{.State.Status}}
	I1207 20:02:56.177393    8179 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 20:02:56.177410    8179 out.go:177]   - Using image docker.io/registry:2.8.3
	I1207 20:02:56.177424    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1207 20:02:56.196679    8179 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1207 20:02:56.197790    8179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-946218
	I1207 20:02:56.206327    8179 out.go:177] * Verifying Kubernetes components...
	I1207 20:02:56.206376    8179 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 20:02:56.206395    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1207 20:02:56.213074    8179 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1207 20:02:56.214569    8179 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1207 20:02:56.214650    8179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-946218
	I1207 20:02:56.216249    8179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 20:02:56.217921    8179 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 20:02:56.241376    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 20:02:56.241442    8179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-946218
	I1207 20:02:56.269996    8179 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-946218"
	I1207 20:02:56.270036    8179 host.go:66] Checking if "addons-946218" exists ...
	I1207 20:02:56.270467    8179 cli_runner.go:164] Run: docker container inspect addons-946218 --format={{.State.Status}}
	I1207 20:02:56.282022    8179 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1207 20:02:56.283679    8179 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1207 20:02:56.283697    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1207 20:02:56.283757    8179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-946218
	I1207 20:02:56.281953    8179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/addons-946218/id_rsa Username:docker}
	I1207 20:02:56.292955    8179 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1207 20:02:56.282381    8179 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1207 20:02:56.294790    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1207 20:02:56.294865    8179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-946218
	I1207 20:02:56.325995    8179 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1207 20:02:56.330243    8179 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1207 20:02:56.335920    8179 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1207 20:02:56.337704    8179 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1207 20:02:56.341472    8179 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1207 20:02:56.344150    8179 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1207 20:02:56.344172    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1207 20:02:56.344236    8179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-946218
	I1207 20:02:56.350867    8179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/addons-946218/id_rsa Username:docker}
	I1207 20:02:56.360653    8179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/addons-946218/id_rsa Username:docker}
	I1207 20:02:56.419417    8179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/addons-946218/id_rsa Username:docker}
	I1207 20:02:56.449240    8179 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 20:02:56.449260    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 20:02:56.449321    8179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-946218
	I1207 20:02:56.485693    8179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/addons-946218/id_rsa Username:docker}
	I1207 20:02:56.503904    8179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/addons-946218/id_rsa Username:docker}
	I1207 20:02:56.507464    8179 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1207 20:02:56.510117    8179 out.go:177]   - Using image docker.io/busybox:stable
	I1207 20:02:56.505167    8179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/addons-946218/id_rsa Username:docker}
	I1207 20:02:56.514331    8179 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1207 20:02:56.514348    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1207 20:02:56.514418    8179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-946218
	I1207 20:02:56.529590    8179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/addons-946218/id_rsa Username:docker}
	I1207 20:02:56.540102    8179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/addons-946218/id_rsa Username:docker}
	I1207 20:02:56.556100    8179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/addons-946218/id_rsa Username:docker}
	I1207 20:02:56.590150    8179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/addons-946218/id_rsa Username:docker}
	I1207 20:02:56.590935    8179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/addons-946218/id_rsa Username:docker}
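Each sshutil line above opens a client to 127.0.0.1:32772 -- the host port Docker mapped to the node container's 22/tcp, as discovered by the `docker container inspect -f` template calls. With the key path and user the log prints, the same session can be opened by hand:

    ssh -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/17719-2292/.minikube/machines/addons-946218/id_rsa \
        -p 32772 docker@127.0.0.1

(`minikube -p addons-946218 ssh` wraps the same connection.)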
	I1207 20:02:57.053579    8179 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1207 20:02:57.053603    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1207 20:02:57.071621    8179 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1207 20:02:57.071648    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1207 20:02:57.221478    8179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 20:02:57.243666    8179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1207 20:02:57.388043    8179 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1207 20:02:57.388113    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1207 20:02:57.395211    8179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1207 20:02:57.399848    8179 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1207 20:02:57.399872    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1207 20:02:57.425016    8179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1207 20:02:57.572200    8179 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1207 20:02:57.572226    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1207 20:02:57.606937    8179 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1207 20:02:57.606960    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1207 20:02:57.667604    8179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 20:02:57.693856    8179 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1207 20:02:57.693880    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1207 20:02:57.694353    8179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1207 20:02:57.701956    8179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1207 20:02:57.752545    8179 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1207 20:02:57.752617    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1207 20:02:57.813141    8179 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1207 20:02:57.813215    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1207 20:02:57.816617    8179 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1207 20:02:57.816687    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1207 20:02:57.874570    8179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1207 20:02:58.135310    8179 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1207 20:02:58.135382    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1207 20:02:58.189665    8179 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 20:02:58.189738    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1207 20:02:58.241407    8179 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1207 20:02:58.241479    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1207 20:02:58.294413    8179 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1207 20:02:58.294484    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1207 20:02:58.401220    8179 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1207 20:02:58.401245    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1207 20:02:58.404581    8179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 20:02:58.436094    8179 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1207 20:02:58.436120    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1207 20:02:58.507497    8179 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1207 20:02:58.507522    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1207 20:02:58.542717    8179 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1207 20:02:58.542751    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1207 20:02:58.592609    8179 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1207 20:02:58.592631    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1207 20:02:58.715487    8179 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1207 20:02:58.715511    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1207 20:02:58.736283    8179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1207 20:02:58.751426    8179 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1207 20:02:58.751457    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1207 20:02:58.910820    8179 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1207 20:02:58.910845    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1207 20:02:59.064174    8179 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1207 20:02:59.064249    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I1207 20:02:59.156273    8179 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1207 20:02:59.156345    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1207 20:02:59.270418    8179 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.078158032s)
	I1207 20:02:59.270517    8179 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.013566939s)
	I1207 20:02:59.271341    8179 node_ready.go:35] waiting up to 6m0s for node "addons-946218" to be "Ready" ...
	I1207 20:02:59.271570    8179 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1207 20:02:59.274678    8179 node_ready.go:49] node "addons-946218" has status "Ready":"True"
	I1207 20:02:59.274741    8179 node_ready.go:38] duration metric: took 3.382282ms waiting for node "addons-946218" to be "Ready" ...
	I1207 20:02:59.274765    8179 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 20:02:59.281889    8179 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-87sl4" in "kube-system" namespace to be "Ready" ...
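node_ready and pod_ready above poll the API through client-go. With the names and the 6m budget shown in the log, the roughly equivalent kubectl checks are:

    kubectl --context addons-946218 wait --for=condition=Ready \
      node/addons-946218 --timeout=6m
    kubectl --context addons-946218 -n kube-system wait --for=condition=Ready \
      pod/coredns-5dd5756b68-87sl4 --timeout=6m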
	I1207 20:02:59.386812    8179 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1207 20:02:59.386880    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1207 20:02:59.423455    8179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1207 20:02:59.521689    8179 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1207 20:02:59.521710    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1207 20:02:59.629453    8179 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1207 20:02:59.629475    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1207 20:02:59.830754    8179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.609237817s)
	I1207 20:02:59.922857    8179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1207 20:03:01.300827    8179 pod_ready.go:102] pod "coredns-5dd5756b68-87sl4" in "kube-system" namespace has status "Ready":"False"
	I1207 20:03:02.593418    8179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.349715377s)
	I1207 20:03:02.593632    8179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.19839646s)
	I1207 20:03:02.593704    8179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.168663834s)
	I1207 20:03:02.593759    8179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.926131884s)
	I1207 20:03:02.593796    8179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.899423553s)
	I1207 20:03:02.787682    8179 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1207 20:03:02.787766    8179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-946218
	I1207 20:03:02.822441    8179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/addons-946218/id_rsa Username:docker}
	I1207 20:03:03.305340    8179 pod_ready.go:102] pod "coredns-5dd5756b68-87sl4" in "kube-system" namespace has status "Ready":"False"
	I1207 20:03:03.455157    8179 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1207 20:03:03.587101    8179 addons.go:231] Setting addon gcp-auth=true in "addons-946218"
	I1207 20:03:03.587145    8179 host.go:66] Checking if "addons-946218" exists ...
	I1207 20:03:03.587627    8179 cli_runner.go:164] Run: docker container inspect addons-946218 --format={{.State.Status}}
	I1207 20:03:03.633818    8179 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1207 20:03:03.633867    8179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-946218
	I1207 20:03:03.667117    8179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/addons-946218/id_rsa Username:docker}
	I1207 20:03:05.564465    8179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.862394563s)
	I1207 20:03:05.564447    8179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.689800195s)
	I1207 20:03:05.564566    8179 addons.go:467] Verifying addon registry=true in "addons-946218"
	I1207 20:03:05.567112    8179 out.go:177] * Verifying registry addon...
	I1207 20:03:05.564757    8179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.16014763s)
	I1207 20:03:05.564528    8179 addons.go:467] Verifying addon ingress=true in "addons-946218"
	I1207 20:03:05.565046    8179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.82873131s)
	I1207 20:03:05.565141    8179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.14165536s)
	I1207 20:03:05.567285    8179 addons.go:467] Verifying addon metrics-server=true in "addons-946218"
	I1207 20:03:05.569564    8179 out.go:177] * Verifying ingress addon...
	W1207 20:03:05.567467    8179 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1207 20:03:05.570523    8179 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1207 20:03:05.572843    8179 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1207 20:03:05.572982    8179 retry.go:31] will retry after 139.689825ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1207 20:03:05.577616    8179 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1207 20:03:05.578249    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:05.578657    8179 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1207 20:03:05.578691    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:03:05.584513    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:05.584981    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:03:05.713266    8179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
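The failure dumped above is the usual CRD-ordering race: csi-hostpath-snapshotclass.yaml creates a VolumeSnapshotClass in the same kubectl apply that installs the snapshot.storage.k8s.io CRDs, and the API server has not yet established the new kinds -- hence "no matches for kind ... ensure CRDs are installed first". minikube's answer is the retry just issued (this time with `apply --force`), which completes at 20:03:08 below. The common manual fix is to split the apply and wait for the CRD to be Established first:

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=Established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml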
	I1207 20:03:05.807022    8179 pod_ready.go:102] pod "coredns-5dd5756b68-87sl4" in "kube-system" namespace has status "Ready":"False"
	I1207 20:03:06.119866    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:06.120366    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:03:06.604214    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:03:06.605220    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:06.971649    8179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.048747799s)
	I1207 20:03:06.971742    8179 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-946218"
	I1207 20:03:06.973794    8179 out.go:177] * Verifying csi-hostpath-driver addon...
	I1207 20:03:06.971953    8179 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.338115664s)
	I1207 20:03:06.976692    8179 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
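The kapi.go polling that follows ("current state: Pending: [<nil>]") watches pods by label selector until they report Ready; expressed as a kubectl command (timeout value illustrative) it is:

    kubectl -n kube-system wait --for=condition=Ready \
      pod -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=6m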
	I1207 20:03:06.978532    8179 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1207 20:03:06.980581    8179 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1207 20:03:06.982301    8179 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1207 20:03:06.982328    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1207 20:03:07.001563    8179 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1207 20:03:07.001594    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:07.011044    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:07.092691    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:03:07.094042    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:07.131001    8179 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1207 20:03:07.131074    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1207 20:03:07.232835    8179 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1207 20:03:07.232865    8179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1207 20:03:07.278603    8179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
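The three gcp-auth manifests applied above create a namespace, a service, and a mutating admission webhook; the webhook is what later injects the credentials file staged at /var/lib/minikube/google_application_credentials.json (plus the project name) into new pods. After the apply completes (1.47s, logged below), the registration should be listed by:

    kubectl get mutatingwebhookconfigurations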
	I1207 20:03:07.517869    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:07.591423    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:03:07.592547    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:08.018206    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:08.091406    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:03:08.092921    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:08.326242    8179 pod_ready.go:102] pod "coredns-5dd5756b68-87sl4" in "kube-system" namespace has status "Ready":"False"
	I1207 20:03:08.461661    8179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.748300983s)
	I1207 20:03:08.518022    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:08.604109    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:08.604895    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:03:08.752059    8179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.473418177s)
	I1207 20:03:08.755065    8179 addons.go:467] Verifying addon gcp-auth=true in "addons-946218"
	I1207 20:03:08.758283    8179 out.go:177] * Verifying gcp-auth addon...
	I1207 20:03:08.761372    8179 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1207 20:03:08.766760    8179 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1207 20:03:08.766783    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:08.770022    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:09.020370    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:09.093273    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:09.094563    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:03:09.274708    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:09.517854    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:09.592650    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:03:09.593855    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:09.777669    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:10.020186    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:10.093578    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:10.094892    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:03:10.273951    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:10.517626    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:10.590495    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:03:10.591593    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:10.778249    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:10.801413    8179 pod_ready.go:102] pod "coredns-5dd5756b68-87sl4" in "kube-system" namespace has status "Ready":"False"
	I1207 20:03:11.017647    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:11.089543    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:03:11.090699    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:11.274460    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:11.517293    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:11.595828    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:03:11.600227    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:11.775773    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:12.017439    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:12.091990    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:03:12.092960    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:12.273751    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:12.517131    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:12.590589    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:12.591301    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:03:12.774209    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:13.017476    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:13.089073    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:13.089829    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:03:13.274200    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:13.300621    8179 pod_ready.go:102] pod "coredns-5dd5756b68-87sl4" in "kube-system" namespace has status "Ready":"False"
	I1207 20:03:13.517443    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:13.590043    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:13.591456    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:03:13.774104    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:14.017681    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:14.092196    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:03:14.093324    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:14.273834    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:14.517784    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:14.589634    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:03:14.590639    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:14.774588    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:15.029321    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:15.095434    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:15.114347    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:03:15.274172    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:15.516388    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:15.591601    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:15.592971    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 20:03:15.775322    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:15.800629    8179 pod_ready.go:102] pod "coredns-5dd5756b68-87sl4" in "kube-system" namespace has status "Ready":"False"
	I1207 20:03:16.017620    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:16.091298    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:16.091832    8179 kapi.go:107] duration metric: took 10.521312733s to wait for kubernetes.io/minikube-addons=registry ...
	I1207 20:03:16.274397    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:16.517591    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:16.589040    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:16.773968    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:17.027362    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:17.089685    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:17.275415    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:17.517243    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:17.589750    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:17.774226    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:17.800831    8179 pod_ready.go:102] pod "coredns-5dd5756b68-87sl4" in "kube-system" namespace has status "Ready":"False"
	I1207 20:03:18.018448    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:18.089613    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:18.274836    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:18.516977    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:18.589028    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:18.773733    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:19.018440    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:19.089355    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:19.274188    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:19.517654    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:19.589549    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:19.774475    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:20.022259    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:20.089824    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:20.273438    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:20.301347    8179 pod_ready.go:102] pod "coredns-5dd5756b68-87sl4" in "kube-system" namespace has status "Ready":"False"
	I1207 20:03:20.517444    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:20.589294    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:20.773768    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:21.017007    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:21.089430    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:21.274078    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:21.517280    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:21.589272    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:21.773978    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:22.018159    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:22.089474    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:22.273828    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:22.516060    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:22.589395    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:22.774048    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:22.800307    8179 pod_ready.go:102] pod "coredns-5dd5756b68-87sl4" in "kube-system" namespace has status "Ready":"False"
	I1207 20:03:23.018021    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:23.090058    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:23.273437    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:23.517485    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:23.589762    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:23.773977    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:24.018489    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:24.089638    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:24.274537    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:24.518290    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:24.588626    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:24.775152    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:24.802003    8179 pod_ready.go:102] pod "coredns-5dd5756b68-87sl4" in "kube-system" namespace has status "Ready":"False"
	I1207 20:03:25.017629    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:25.090152    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:25.273691    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:25.517485    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:25.589297    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:25.773865    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:26.020031    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:26.089562    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:26.274379    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:26.516984    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:26.590215    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:26.773817    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:27.017865    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:27.089987    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:27.274002    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:27.300659    8179 pod_ready.go:102] pod "coredns-5dd5756b68-87sl4" in "kube-system" namespace has status "Ready":"False"
	I1207 20:03:27.523186    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:27.590410    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:27.774564    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:28.022543    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:28.089331    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:28.274375    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:28.516827    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:28.589054    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:28.774220    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:29.017008    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:29.089681    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:29.274782    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:29.301994    8179 pod_ready.go:102] pod "coredns-5dd5756b68-87sl4" in "kube-system" namespace has status "Ready":"False"
	I1207 20:03:29.522763    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:29.589689    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:29.775605    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:30.035442    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:30.092359    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:30.276985    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:30.516790    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:30.589566    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:30.774237    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:31.018160    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:31.089367    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:31.274389    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:31.517821    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:31.589426    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:31.773883    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:31.801254    8179 pod_ready.go:102] pod "coredns-5dd5756b68-87sl4" in "kube-system" namespace has status "Ready":"False"
	I1207 20:03:32.017113    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:32.089708    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:32.276653    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:32.518217    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:32.589380    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:32.774044    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:33.017855    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:33.089773    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:33.274783    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:33.517187    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:33.598000    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:33.774449    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:33.806803    8179 pod_ready.go:102] pod "coredns-5dd5756b68-87sl4" in "kube-system" namespace has status "Ready":"False"
	I1207 20:03:34.018435    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:34.090518    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:34.274433    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:34.518464    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:34.594343    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:34.773739    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:35.022048    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:35.089908    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:35.274282    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:35.516681    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:35.588864    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:35.773560    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:36.018935    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:36.094083    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:36.286117    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:36.300608    8179 pod_ready.go:102] pod "coredns-5dd5756b68-87sl4" in "kube-system" namespace has status "Ready":"False"
	I1207 20:03:36.519854    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:36.588898    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:36.773619    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:36.804080    8179 pod_ready.go:92] pod "coredns-5dd5756b68-87sl4" in "kube-system" namespace has status "Ready":"True"
	I1207 20:03:36.804102    8179 pod_ready.go:81] duration metric: took 37.522132668s waiting for pod "coredns-5dd5756b68-87sl4" in "kube-system" namespace to be "Ready" ...
	I1207 20:03:36.804113    8179 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-946218" in "kube-system" namespace to be "Ready" ...
	I1207 20:03:36.821446    8179 pod_ready.go:92] pod "etcd-addons-946218" in "kube-system" namespace has status "Ready":"True"
	I1207 20:03:36.821474    8179 pod_ready.go:81] duration metric: took 17.35357ms waiting for pod "etcd-addons-946218" in "kube-system" namespace to be "Ready" ...
	I1207 20:03:36.821486    8179 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-946218" in "kube-system" namespace to be "Ready" ...
	I1207 20:03:36.829748    8179 pod_ready.go:92] pod "kube-apiserver-addons-946218" in "kube-system" namespace has status "Ready":"True"
	I1207 20:03:36.829846    8179 pod_ready.go:81] duration metric: took 8.347442ms waiting for pod "kube-apiserver-addons-946218" in "kube-system" namespace to be "Ready" ...
	I1207 20:03:36.829905    8179 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-946218" in "kube-system" namespace to be "Ready" ...
	I1207 20:03:36.839519    8179 pod_ready.go:92] pod "kube-controller-manager-addons-946218" in "kube-system" namespace has status "Ready":"True"
	I1207 20:03:36.839588    8179 pod_ready.go:81] duration metric: took 9.657946ms waiting for pod "kube-controller-manager-addons-946218" in "kube-system" namespace to be "Ready" ...
	I1207 20:03:36.839614    8179 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t6tdx" in "kube-system" namespace to be "Ready" ...
	I1207 20:03:36.849787    8179 pod_ready.go:92] pod "kube-proxy-t6tdx" in "kube-system" namespace has status "Ready":"True"
	I1207 20:03:36.849858    8179 pod_ready.go:81] duration metric: took 10.225071ms waiting for pod "kube-proxy-t6tdx" in "kube-system" namespace to be "Ready" ...
	I1207 20:03:36.849882    8179 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-946218" in "kube-system" namespace to be "Ready" ...
	I1207 20:03:37.018137    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:37.089735    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:37.198428    8179 pod_ready.go:92] pod "kube-scheduler-addons-946218" in "kube-system" namespace has status "Ready":"True"
	I1207 20:03:37.198497    8179 pod_ready.go:81] duration metric: took 348.594806ms waiting for pod "kube-scheduler-addons-946218" in "kube-system" namespace to be "Ready" ...
	I1207 20:03:37.198522    8179 pod_ready.go:38] duration metric: took 37.923732596s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 20:03:37.198572    8179 api_server.go:52] waiting for apiserver process to appear ...
	I1207 20:03:37.198655    8179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 20:03:37.218270    8179 api_server.go:72] duration metric: took 41.023833459s to wait for apiserver process to appear ...
	I1207 20:03:37.218337    8179 api_server.go:88] waiting for apiserver healthz status ...
	I1207 20:03:37.218367    8179 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1207 20:03:37.228279    8179 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
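	The healthz probe above can be reproduced by hand against the same endpoint; a minimal sketch, assuming the default minikube certificate layout under ~/.minikube (whether an unauthenticated request is accepted depends on the cluster's anonymous-auth/RBAC settings):
	  # unauthenticated probe; /healthz is often readable via the system:public-info-viewer binding
	  curl -k https://192.168.49.2:8443/healthz
	  # authenticated probe using the profile's client certificates (paths are minikube defaults, not taken from this run)
	  curl --cacert ~/.minikube/ca.crt \
	       --cert   ~/.minikube/profiles/addons-946218/client.crt \
	       --key    ~/.minikube/profiles/addons-946218/client.key \
	       https://192.168.49.2:8443/healthz
	A healthy apiserver answers with the same "ok" body logged here.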
	I1207 20:03:37.229899    8179 api_server.go:141] control plane version: v1.28.4
	I1207 20:03:37.229919    8179 api_server.go:131] duration metric: took 11.563834ms to wait for apiserver health ...
	I1207 20:03:37.229928    8179 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 20:03:37.274743    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:37.405242    8179 system_pods.go:59] 17 kube-system pods found
	I1207 20:03:37.405328    8179 system_pods.go:61] "coredns-5dd5756b68-87sl4" [940861a1-dd18-48c7-9757-da06c9ac735a] Running
	I1207 20:03:37.405352    8179 system_pods.go:61] "csi-hostpath-attacher-0" [145f2b2b-baa8-4da4-9a09-43f01eb7169c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1207 20:03:37.405378    8179 system_pods.go:61] "csi-hostpath-resizer-0" [32edb4a5-9f67-4e22-8e62-f565c69909e2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1207 20:03:37.405419    8179 system_pods.go:61] "csi-hostpathplugin-vddln" [89ac7dd2-6d3c-4306-89fb-0f13e6848ada] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1207 20:03:37.405439    8179 system_pods.go:61] "etcd-addons-946218" [6883eb4a-4422-4737-98ef-b8c420c8a4bb] Running
	I1207 20:03:37.405460    8179 system_pods.go:61] "kube-apiserver-addons-946218" [6fcf420f-a09e-43c1-aa06-92fa28bbb164] Running
	I1207 20:03:37.405491    8179 system_pods.go:61] "kube-controller-manager-addons-946218" [1cba9fe0-bb3d-4fad-973b-6526a279dc6b] Running
	I1207 20:03:37.405518    8179 system_pods.go:61] "kube-ingress-dns-minikube" [95ff49c4-8d57-456f-93e2-0e4f3819d2df] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1207 20:03:37.405538    8179 system_pods.go:61] "kube-proxy-t6tdx" [acf79489-4f26-418b-9de6-e1a72c27596d] Running
	I1207 20:03:37.405560    8179 system_pods.go:61] "kube-scheduler-addons-946218" [4bb38dec-f323-44b2-b0de-ed93e55f9969] Running
	I1207 20:03:37.405593    8179 system_pods.go:61] "metrics-server-7c66d45ddc-hc4mm" [796d70f9-0a3c-4906-923f-5239ec4a547f] Running
	I1207 20:03:37.405613    8179 system_pods.go:61] "nvidia-device-plugin-daemonset-pq9kj" [c8d63810-cef8-46ea-8b3f-4a331c68a9ce] Running
	I1207 20:03:37.405633    8179 system_pods.go:61] "registry-proxy-88zd5" [05cfdd65-400d-46d7-a81d-b22181d9c3d1] Running
	I1207 20:03:37.405655    8179 system_pods.go:61] "registry-vbggm" [f9501618-888e-41c1-87bc-c0c145626641] Running
	I1207 20:03:37.405691    8179 system_pods.go:61] "snapshot-controller-58dbcc7b99-7h4t9" [9c7d819d-d910-4377-bb62-2f0a03a55e45] Running
	I1207 20:03:37.405714    8179 system_pods.go:61] "snapshot-controller-58dbcc7b99-f54xl" [e4fc1f3b-ebaa-446b-bb56-f6db137f1aa5] Running
	I1207 20:03:37.405733    8179 system_pods.go:61] "storage-provisioner" [50e8b55e-af1d-4d96-85a4-723e4f951ca3] Running
	I1207 20:03:37.405754    8179 system_pods.go:74] duration metric: took 175.819819ms to wait for pod list to return data ...
	I1207 20:03:37.405776    8179 default_sa.go:34] waiting for default service account to be created ...
	I1207 20:03:37.517060    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:37.589914    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:37.597789    8179 default_sa.go:45] found service account: "default"
	I1207 20:03:37.597858    8179 default_sa.go:55] duration metric: took 192.050708ms for default service account to be created ...
	I1207 20:03:37.597881    8179 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 20:03:37.774790    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:37.806873    8179 system_pods.go:86] 17 kube-system pods found
	I1207 20:03:37.806947    8179 system_pods.go:89] "coredns-5dd5756b68-87sl4" [940861a1-dd18-48c7-9757-da06c9ac735a] Running
	I1207 20:03:37.806973    8179 system_pods.go:89] "csi-hostpath-attacher-0" [145f2b2b-baa8-4da4-9a09-43f01eb7169c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1207 20:03:37.807000    8179 system_pods.go:89] "csi-hostpath-resizer-0" [32edb4a5-9f67-4e22-8e62-f565c69909e2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1207 20:03:37.807043    8179 system_pods.go:89] "csi-hostpathplugin-vddln" [89ac7dd2-6d3c-4306-89fb-0f13e6848ada] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1207 20:03:37.807061    8179 system_pods.go:89] "etcd-addons-946218" [6883eb4a-4422-4737-98ef-b8c420c8a4bb] Running
	I1207 20:03:37.807082    8179 system_pods.go:89] "kube-apiserver-addons-946218" [6fcf420f-a09e-43c1-aa06-92fa28bbb164] Running
	I1207 20:03:37.807114    8179 system_pods.go:89] "kube-controller-manager-addons-946218" [1cba9fe0-bb3d-4fad-973b-6526a279dc6b] Running
	I1207 20:03:37.807138    8179 system_pods.go:89] "kube-ingress-dns-minikube" [95ff49c4-8d57-456f-93e2-0e4f3819d2df] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1207 20:03:37.807156    8179 system_pods.go:89] "kube-proxy-t6tdx" [acf79489-4f26-418b-9de6-e1a72c27596d] Running
	I1207 20:03:37.807177    8179 system_pods.go:89] "kube-scheduler-addons-946218" [4bb38dec-f323-44b2-b0de-ed93e55f9969] Running
	I1207 20:03:37.807199    8179 system_pods.go:89] "metrics-server-7c66d45ddc-hc4mm" [796d70f9-0a3c-4906-923f-5239ec4a547f] Running
	I1207 20:03:37.807233    8179 system_pods.go:89] "nvidia-device-plugin-daemonset-pq9kj" [c8d63810-cef8-46ea-8b3f-4a331c68a9ce] Running
	I1207 20:03:37.807252    8179 system_pods.go:89] "registry-proxy-88zd5" [05cfdd65-400d-46d7-a81d-b22181d9c3d1] Running
	I1207 20:03:37.807274    8179 system_pods.go:89] "registry-vbggm" [f9501618-888e-41c1-87bc-c0c145626641] Running
	I1207 20:03:37.807308    8179 system_pods.go:89] "snapshot-controller-58dbcc7b99-7h4t9" [9c7d819d-d910-4377-bb62-2f0a03a55e45] Running
	I1207 20:03:37.807330    8179 system_pods.go:89] "snapshot-controller-58dbcc7b99-f54xl" [e4fc1f3b-ebaa-446b-bb56-f6db137f1aa5] Running
	I1207 20:03:37.807347    8179 system_pods.go:89] "storage-provisioner" [50e8b55e-af1d-4d96-85a4-723e4f951ca3] Running
	I1207 20:03:37.807368    8179 system_pods.go:126] duration metric: took 209.469491ms to wait for k8s-apps to be running ...
	I1207 20:03:37.807389    8179 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 20:03:37.807467    8179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 20:03:37.823714    8179 system_svc.go:56] duration metric: took 16.318002ms WaitForService to wait for kubelet.
	I1207 20:03:37.823736    8179 kubeadm.go:581] duration metric: took 41.629305936s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 20:03:37.823756    8179 node_conditions.go:102] verifying NodePressure condition ...
	I1207 20:03:37.998575    8179 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1207 20:03:37.998604    8179 node_conditions.go:123] node cpu capacity is 2
	I1207 20:03:37.998616    8179 node_conditions.go:105] duration metric: took 174.855397ms to run NodePressure ...
	I1207 20:03:37.998629    8179 start.go:228] waiting for startup goroutines ...
	I1207 20:03:38.018820    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:38.090420    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:38.274231    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:38.517933    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:38.594445    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:38.774284    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:39.020600    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:39.090050    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:39.273744    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:39.517876    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:39.589251    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:39.774700    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:40.036762    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:40.089358    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:40.274186    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:40.517620    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:40.589924    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:40.774767    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:41.017779    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:41.089511    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:41.274572    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:41.518288    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:41.593234    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:41.778673    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:42.031977    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:42.156251    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:42.290306    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:42.517476    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:42.590319    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:42.774423    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:43.017255    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:43.089990    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:43.273648    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:43.517339    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:43.595646    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:43.774825    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:44.017176    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:44.089813    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:44.273809    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:44.518288    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:44.590151    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:44.786409    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:45.026779    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:45.097244    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:45.282226    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:45.516611    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:45.589469    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:45.774279    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:46.017215    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:46.089990    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:46.274113    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:46.517457    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:46.589648    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:46.774554    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:47.017477    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:47.089284    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:47.273943    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:47.516883    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:47.589888    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:47.774867    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:48.020340    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:48.090593    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:48.290744    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:48.517850    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:48.589097    8179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 20:03:48.773664    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:49.018302    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:49.090685    8179 kapi.go:107] duration metric: took 43.517839546s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1207 20:03:49.274745    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:49.517282    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:49.774137    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:50.022301    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:50.274439    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:50.516967    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:50.774568    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:51.017295    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:51.273852    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:51.517223    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:51.778949    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:52.018271    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:52.274140    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:52.516881    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:52.774629    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:53.017952    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:53.279320    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:53.517545    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:53.774566    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:54.019348    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:54.274338    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:54.516354    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:54.773953    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:55.017395    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:55.274041    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:55.517143    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:55.774471    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:56.018898    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:56.273461    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:56.516162    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 20:03:56.774065    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:57.021829    8179 kapi.go:107] duration metric: took 50.045132999s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1207 20:03:57.277283    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:57.774253    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:58.273381    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:58.773922    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:59.273680    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:03:59.774616    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:00.311594    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:00.773710    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:01.273608    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:01.774187    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:02.274138    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:02.774108    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:03.273971    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:03.773406    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:04.273884    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:04.773742    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:05.273580    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:05.774144    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:06.274185    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:06.774240    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:07.274286    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:07.773684    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:08.273918    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:08.774576    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:09.274072    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:09.774207    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:10.274256    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:10.773335    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:11.273635    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:11.774119    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:12.274099    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:12.774027    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:13.274073    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:13.774139    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:14.273729    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:14.773740    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:15.277801    8179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 20:04:15.773249    8179 kapi.go:107] duration metric: took 1m7.011872852s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1207 20:04:15.775106    8179 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-946218 cluster.
	I1207 20:04:15.777195    8179 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1207 20:04:15.778673    8179 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1207 20:04:15.780492    8179 out.go:177] * Enabled addons: default-storageclass, cloud-spanner, ingress-dns, storage-provisioner, nvidia-device-plugin, storage-provisioner-rancher, inspektor-gadget, metrics-server, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1207 20:04:15.782283    8179 addons.go:502] enable addons completed in 1m19.961268887s: enabled=[default-storageclass cloud-spanner ingress-dns storage-provisioner nvidia-device-plugin storage-provisioner-rancher inspektor-gadget metrics-server volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1207 20:04:15.782318    8179 start.go:233] waiting for cluster config update ...
	I1207 20:04:15.782338    8179 start.go:242] writing updated cluster config ...
	I1207 20:04:15.783097    8179 ssh_runner.go:195] Run: rm -f paused
	I1207 20:04:16.101758    8179 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1207 20:04:16.104016    8179 out.go:177] * Done! kubectl is now configured to use "addons-946218" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* Dec 07 20:05:03 addons-946218 dockerd[1094]: time="2023-12-07T20:05:03.921038067Z" level=info msg="ignoring event" container=cf065f5d302a82e5fd2f98e26daae9db4951fa3281f2b281026e7daeb2472be4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:05:04 addons-946218 cri-dockerd[1305]: time="2023-12-07T20:05:04Z" level=info msg="Stop pulling image gcr.io/google-samples/hello-app:1.0: Status: Downloaded newer image for gcr.io/google-samples/hello-app:1.0"
	Dec 07 20:05:04 addons-946218 dockerd[1094]: time="2023-12-07T20:05:04.192923627Z" level=info msg="ignoring event" container=55efbfff40c194641abab7a4bd25fc8e40f35641ea09f28ca68b0ee3f66e75b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:05:04 addons-946218 dockerd[1094]: time="2023-12-07T20:05:04.767794295Z" level=info msg="ignoring event" container=b919099d7cbbbdbb01d3d2c1f0094b5ebc6ce1dab5c9d7bfe95d33f88692f181 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:05:09 addons-946218 dockerd[1094]: time="2023-12-07T20:05:09.500698082Z" level=info msg="ignoring event" container=cb2f8f76692e815b0317e2b0fac4976298e210ff28142dcbbb676dbd2059e8ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:05:09 addons-946218 dockerd[1094]: time="2023-12-07T20:05:09.527509749Z" level=info msg="ignoring event" container=360401141c05b40b46060c5a05ec89adf35b65d1af488c5daf8e814ba374dd6f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:05:09 addons-946218 dockerd[1094]: time="2023-12-07T20:05:09.648010089Z" level=info msg="ignoring event" container=c5098e32a885ae94c9e18971b4cf8cdf8b43d0d57c59980f7b417ff2d18d0fe2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:05:09 addons-946218 dockerd[1094]: time="2023-12-07T20:05:09.696182907Z" level=info msg="ignoring event" container=c5c0b385d775980bf25c790110c6ded9f22d872d0cda63cf8f2f66cbfc80ec79 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:05:11 addons-946218 cri-dockerd[1305]: time="2023-12-07T20:05:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7776ed5b177a060c0e25a5fa64c5387b245f676ce8e3f77874367bab11267187/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Dec 07 20:05:11 addons-946218 dockerd[1094]: time="2023-12-07T20:05:11.247350333Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 07 20:05:11 addons-946218 cri-dockerd[1305]: time="2023-12-07T20:05:11Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Status: Downloaded newer image for busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 07 20:05:11 addons-946218 dockerd[1094]: time="2023-12-07T20:05:11.970578251Z" level=info msg="ignoring event" container=b7f74386d1e543ca04649d0957c1296df1281ddd1b37a6033d7184315b6ad4fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:05:14 addons-946218 dockerd[1094]: time="2023-12-07T20:05:14.075855526Z" level=info msg="ignoring event" container=7776ed5b177a060c0e25a5fa64c5387b245f676ce8e3f77874367bab11267187 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:05:16 addons-946218 cri-dockerd[1305]: time="2023-12-07T20:05:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/16b519c3911ba045908d0afd71c79f2020f54d2449b369b2a6dda44f6a614d11/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Dec 07 20:05:16 addons-946218 dockerd[1094]: time="2023-12-07T20:05:16.512568932Z" level=info msg="ignoring event" container=d91c20ff03d0ea7dc0626a284746eff1c89c695ef9a94272061c3a5f67daca7b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:05:16 addons-946218 cri-dockerd[1305]: time="2023-12-07T20:05:16Z" level=info msg="Stop pulling image busybox:stable: Status: Downloaded newer image for busybox:stable"
	Dec 07 20:05:17 addons-946218 dockerd[1094]: time="2023-12-07T20:05:17.084569521Z" level=info msg="ignoring event" container=c71980c222a3bd2d23b710100d28e0cfe972e4806cac5936e2f45b911aded3d6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:05:18 addons-946218 dockerd[1094]: time="2023-12-07T20:05:18.321597996Z" level=info msg="ignoring event" container=16b519c3911ba045908d0afd71c79f2020f54d2449b369b2a6dda44f6a614d11 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:05:20 addons-946218 cri-dockerd[1305]: time="2023-12-07T20:05:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5533ccc0263224c1793492c2e40ac9b0eb3a4dc4ff4fe86ec0b5ba5cba9a4799/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Dec 07 20:05:20 addons-946218 dockerd[1094]: time="2023-12-07T20:05:20.278321283Z" level=info msg="ignoring event" container=10a05235c152f3fc45c87326a7a5bebe33c3481005dff14fcb1c958328758d63 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:05:20 addons-946218 dockerd[1094]: time="2023-12-07T20:05:20.797217559Z" level=info msg="ignoring event" container=d052eb56a118a0a2584aea37272b9c1e564489bba4cfb085b707c5e14e3ff4d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:05:20 addons-946218 dockerd[1094]: time="2023-12-07T20:05:20.899441374Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=4663908f1e2cbdcacd9a821db19e79e46403f7ef10b3b1fbedba515e86ff1163
	Dec 07 20:05:20 addons-946218 dockerd[1094]: time="2023-12-07T20:05:20.997920088Z" level=info msg="ignoring event" container=4663908f1e2cbdcacd9a821db19e79e46403f7ef10b3b1fbedba515e86ff1163 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:05:21 addons-946218 dockerd[1094]: time="2023-12-07T20:05:21.123751371Z" level=info msg="ignoring event" container=e5c89fa1327ab0fae0924d95340c066fc3ca6a6b91357b67e48be9a34333dee6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:05:21 addons-946218 dockerd[1094]: time="2023-12-07T20:05:21.458845933Z" level=info msg="ignoring event" container=5533ccc0263224c1793492c2e40ac9b0eb3a4dc4ff4fe86ec0b5ba5cba9a4799 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                       ATTEMPT             POD ID              POD
	d052eb56a118a       dd1b12fcb6097                                                                                                                6 seconds ago        Exited              hello-world-app            2                   7f0363c64c8df       hello-world-app-5d77478584-mpj5l
	10a05235c152f       fc9db2894f4e4                                                                                                                6 seconds ago        Exited              helper-pod                 0                   5533ccc026322       helper-pod-delete-pvc-6224022a-bf0c-43f9-b398-1fc2163a085b
	c71980c222a3b       busybox@sha256:1ceb872bcc68a8fcd34c97952658b58086affdcb604c90c1dee2735bde5edc2f                                              10 seconds ago       Exited              busybox                    0                   16b519c3911ba       test-local-path
	b7f74386d1e54       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                              15 seconds ago       Exited              helper-pod                 0                   7776ed5b177a0       helper-pod-create-pvc-6224022a-bf0c-43f9-b398-1fc2163a085b
	abdb137054e28       nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc                                                35 seconds ago       Running             nginx                      0                   cc59cf879cc97       nginx
	756627c4ffc90       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                 About a minute ago   Running             gcp-auth                   0                   b18fe78bdfe3f       gcp-auth-d4c87556c-npr8p
	38dae9c94ed72       af594c6a879f2                                                                                                                About a minute ago   Exited              patch                      1                   20db6a82216fd       ingress-nginx-admission-patch-q8q2r
	1d7e79033dae7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80   About a minute ago   Exited              create                     0                   12ed10cb501ac       ingress-nginx-admission-create-k7nq7
	aaf473fd6d5f8       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       About a minute ago   Running             local-path-provisioner     0                   f851ecd18561a       local-path-provisioner-78b46b4d5c-nvtlm
	c8cc401b5adac       gcr.io/cloud-spanner-emulator/emulator@sha256:9ded3fac22d4d1c85ae51473e3876e2377f5179192fea664409db0fe87e05ece               2 minutes ago        Running             cloud-spanner-emulator     0                   a41f1d2ac4c91       cloud-spanner-emulator-5649c69bf6-mx6tg
	869c65374f733       nvcr.io/nvidia/k8s-device-plugin@sha256:339be23400f58c04f09b6ba1d4d2e0e7120648f2b114880513685b22093311f1                     2 minutes ago        Running             nvidia-device-plugin-ctr   0                   94d4554a37f9a       nvidia-device-plugin-daemonset-pq9kj
	cc59fd8925b34       ba04bb24b9575                                                                                                                2 minutes ago        Running             storage-provisioner        0                   12daa2e04df1a       storage-provisioner
	57bbb646b381c       97e04611ad434                                                                                                                2 minutes ago        Running             coredns                    0                   ebc440c71c4b7       coredns-5dd5756b68-87sl4
	90b5454ab4cd0       3ca3ca488cf13                                                                                                                2 minutes ago        Running             kube-proxy                 0                   315e2ec7e69b0       kube-proxy-t6tdx
	49427c98d9f8e       9961cbceaf234                                                                                                                2 minutes ago        Running             kube-controller-manager    0                   24c264dcfef06       kube-controller-manager-addons-946218
	cb07176ac708d       05c284c929889                                                                                                                2 minutes ago        Running             kube-scheduler             0                   23d6d9754f8f0       kube-scheduler-addons-946218
	427abc39042f1       9cdd6470f48c8                                                                                                                2 minutes ago        Running             etcd                       0                   68d7595002ab4       etcd-addons-946218
	342a21508a16a       04b4c447bb9d4                                                                                                                2 minutes ago        Running             kube-apiserver             0                   13178e7cd9916       kube-apiserver-addons-946218
	
	* 
	* ==> coredns [57bbb646b381] <==
	* [INFO] 10.244.0.18:58171 - 26841 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000083076s
	[INFO] 10.244.0.18:58171 - 44717 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000052816s
	[INFO] 10.244.0.18:46396 - 57267 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002269897s
	[INFO] 10.244.0.18:46396 - 4898 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000110925s
	[INFO] 10.244.0.18:58171 - 30843 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001709574s
	[INFO] 10.244.0.18:58171 - 32781 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001026764s
	[INFO] 10.244.0.18:58171 - 34739 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000062851s
	[INFO] 10.244.0.18:56225 - 8002 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000127318s
	[INFO] 10.244.0.18:46402 - 2086 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000057198s
	[INFO] 10.244.0.18:56225 - 23846 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00004384s
	[INFO] 10.244.0.18:46402 - 39186 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000114403s
	[INFO] 10.244.0.18:56225 - 58236 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000131995s
	[INFO] 10.244.0.18:46402 - 1037 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000090403s
	[INFO] 10.244.0.18:56225 - 29428 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000188708s
	[INFO] 10.244.0.18:46402 - 33915 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000076315s
	[INFO] 10.244.0.18:56225 - 11041 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000055007s
	[INFO] 10.244.0.18:46402 - 27864 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000076266s
	[INFO] 10.244.0.18:56225 - 45899 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000061136s
	[INFO] 10.244.0.18:46402 - 47361 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000131437s
	[INFO] 10.244.0.18:46402 - 3997 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001947726s
	[INFO] 10.244.0.18:56225 - 10564 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002189291s
	[INFO] 10.244.0.18:56225 - 29015 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001772334s
	[INFO] 10.244.0.18:46402 - 46177 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001320136s
	[INFO] 10.244.0.18:56225 - 56364 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000064426s
	[INFO] 10.244.0.18:46402 - 14832 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000049534s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-946218
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-946218
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c
	                    minikube.k8s.io/name=addons-946218
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_07T20_02_42_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-946218
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Dec 2023 20:02:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-946218
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Dec 2023 20:05:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Dec 2023 20:05:16 +0000   Thu, 07 Dec 2023 20:02:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Dec 2023 20:05:16 +0000   Thu, 07 Dec 2023 20:02:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Dec 2023 20:05:16 +0000   Thu, 07 Dec 2023 20:02:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Dec 2023 20:05:16 +0000   Thu, 07 Dec 2023 20:02:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-946218
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	System Info:
	  Machine ID:                 32721876353a4a3683a7da666da8bcf3
	  System UUID:                4fa5688d-8776-4d1a-9879-2e8eb500ea66
	  Boot ID:                    654d4215-4a80-4da6-8d0f-f014f59dffc2
	  Kernel Version:             5.15.0-1050-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5649c69bf6-mx6tg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  default                     hello-world-app-5d77478584-mpj5l           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  gcp-auth                    gcp-auth-d4c87556c-npr8p                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 coredns-5dd5756b68-87sl4                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m30s
	  kube-system                 etcd-addons-946218                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m44s
	  kube-system                 kube-apiserver-addons-946218               250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 kube-controller-manager-addons-946218      200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 kube-proxy-t6tdx                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-scheduler-addons-946218               100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 nvidia-device-plugin-daemonset-pq9kj       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  local-path-storage          local-path-provisioner-78b46b4d5c-nvtlm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m28s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m52s (x8 over 2m52s)  kubelet          Node addons-946218 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m52s (x8 over 2m52s)  kubelet          Node addons-946218 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m52s (x7 over 2m52s)  kubelet          Node addons-946218 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m44s                  kubelet          Node addons-946218 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m44s                  kubelet          Node addons-946218 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m44s                  kubelet          Node addons-946218 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m44s                  kubelet          Node addons-946218 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m44s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m34s                  kubelet          Node addons-946218 status is now: NodeReady
	  Normal  RegisteredNode           2m32s                  node-controller  Node addons-946218 event: Registered Node addons-946218 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec 7 19:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015157] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +1.345860] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.584547] kauditd_printk_skb: 26 callbacks suppressed
	
	* 
	* ==> etcd [427abc39042f] <==
	* {"level":"info","ts":"2023-12-07T20:02:35.189525Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2023-12-07T20:02:35.1899Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-12-07T20:02:35.190249Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-12-07T20:02:35.190062Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-12-07T20:02:35.190276Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-12-07T20:02:35.191817Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-07T20:02:35.191847Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-07T20:02:35.620802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-07T20:02:35.621015Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-07T20:02:35.621132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-12-07T20:02:35.621227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-12-07T20:02:35.621318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-12-07T20:02:35.621409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-12-07T20:02:35.621518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-12-07T20:02:35.624786Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-946218 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-07T20:02:35.625697Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-07T20:02:35.626933Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-07T20:02:35.627144Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T20:02:35.627448Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-07T20:02:35.63572Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-12-07T20:02:35.636338Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T20:02:35.636537Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T20:02:35.636653Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T20:02:35.640729Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-07T20:02:35.640882Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> gcp-auth [756627c4ffc9] <==
	* 2023/12/07 20:04:15 GCP Auth Webhook started!
	2023/12/07 20:04:26 Ready to marshal response ...
	2023/12/07 20:04:26 Ready to write response ...
	2023/12/07 20:04:36 Ready to marshal response ...
	2023/12/07 20:04:36 Ready to write response ...
	2023/12/07 20:04:49 Ready to marshal response ...
	2023/12/07 20:04:49 Ready to write response ...
	2023/12/07 20:04:53 Ready to marshal response ...
	2023/12/07 20:04:53 Ready to write response ...
	2023/12/07 20:04:59 Ready to marshal response ...
	2023/12/07 20:04:59 Ready to write response ...
	2023/12/07 20:05:10 Ready to marshal response ...
	2023/12/07 20:05:10 Ready to write response ...
	2023/12/07 20:05:10 Ready to marshal response ...
	2023/12/07 20:05:10 Ready to write response ...
	2023/12/07 20:05:19 Ready to marshal response ...
	2023/12/07 20:05:19 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  20:05:26 up 47 min,  0 users,  load average: 1.61, 1.49, 0.66
	Linux addons-946218 5.15.0-1050-aws #55~20.04.1-Ubuntu SMP Mon Nov 6 12:18:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kube-apiserver [342a21508a16] <==
	* I1207 20:04:42.482457       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1207 20:04:43.500698       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1207 20:04:48.661434       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1207 20:04:49.036027       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1207 20:04:49.388958       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.5.87"}
	I1207 20:05:00.560536       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.255.99"}
	I1207 20:05:09.276615       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 20:05:09.277192       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1207 20:05:09.287500       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 20:05:09.288352       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1207 20:05:09.296852       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 20:05:09.296947       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1207 20:05:09.307667       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 20:05:09.307716       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1207 20:05:09.326955       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 20:05:09.327226       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1207 20:05:09.340066       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 20:05:09.340129       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1207 20:05:09.365437       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 20:05:09.365493       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1207 20:05:09.370153       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 20:05:09.370252       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1207 20:05:10.297652       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1207 20:05:10.370710       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1207 20:05:10.389245       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	* 
	* ==> kube-controller-manager [49427c98d9f8] <==
	* W1207 20:05:13.988291       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1207 20:05:13.988330       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1207 20:05:14.418449       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1207 20:05:14.418484       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1207 20:05:15.159966       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1207 20:05:15.160009       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1207 20:05:17.215663       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1207 20:05:17.215694       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1207 20:05:17.580902       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1207 20:05:17.580937       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1207 20:05:17.856676       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1207 20:05:17.857526       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="96.196µs"
	I1207 20:05:17.866158       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	W1207 20:05:19.156556       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1207 20:05:19.156589       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1207 20:05:20.182766       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="17.51µs"
	I1207 20:05:21.394806       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="70.121µs"
	W1207 20:05:23.765835       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1207 20:05:23.765868       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1207 20:05:25.199436       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1207 20:05:25.199672       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1207 20:05:25.273059       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I1207 20:05:25.273330       1 shared_informer.go:318] Caches are synced for resource quota
	I1207 20:05:25.725834       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1207 20:05:25.725890       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [90b5454ab4cd] <==
	* I1207 20:02:57.111488       1 server_others.go:69] "Using iptables proxy"
	I1207 20:02:57.143889       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1207 20:02:57.321183       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1207 20:02:57.325748       1 server_others.go:152] "Using iptables Proxier"
	I1207 20:02:57.325781       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1207 20:02:57.325788       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1207 20:02:57.325834       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1207 20:02:57.326034       1 server.go:846] "Version info" version="v1.28.4"
	I1207 20:02:57.326045       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 20:02:57.327070       1 config.go:188] "Starting service config controller"
	I1207 20:02:57.327121       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1207 20:02:57.327144       1 config.go:97] "Starting endpoint slice config controller"
	I1207 20:02:57.327148       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1207 20:02:57.329602       1 config.go:315] "Starting node config controller"
	I1207 20:02:57.329614       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1207 20:02:57.427207       1 shared_informer.go:318] Caches are synced for service config
	I1207 20:02:57.427275       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1207 20:02:57.429945       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [cb07176ac708] <==
	* W1207 20:02:39.401545       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1207 20:02:39.402621       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1207 20:02:39.401578       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1207 20:02:39.401614       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1207 20:02:39.402874       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1207 20:02:39.402848       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1207 20:02:39.403410       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1207 20:02:39.403528       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1207 20:02:39.403668       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1207 20:02:39.403748       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1207 20:02:40.212981       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1207 20:02:40.213322       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 20:02:40.237879       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1207 20:02:40.237923       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1207 20:02:40.240301       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1207 20:02:40.240519       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1207 20:02:40.372208       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1207 20:02:40.372469       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1207 20:02:40.384505       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1207 20:02:40.384762       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1207 20:02:40.424265       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1207 20:02:40.424304       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1207 20:02:40.484260       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1207 20:02:40.484299       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1207 20:02:41.987884       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Dec 07 20:05:21 addons-946218 kubelet[2322]: I1207 20:05:21.309745    2322 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/294b182c-8c00-4bd0-a19d-370f998cc8b3-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "294b182c-8c00-4bd0-a19d-370f998cc8b3" (UID: "294b182c-8c00-4bd0-a19d-370f998cc8b3"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 07 20:05:21 addons-946218 kubelet[2322]: I1207 20:05:21.310131    2322 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/294b182c-8c00-4bd0-a19d-370f998cc8b3-kube-api-access-cdldn" (OuterVolumeSpecName: "kube-api-access-cdldn") pod "294b182c-8c00-4bd0-a19d-370f998cc8b3" (UID: "294b182c-8c00-4bd0-a19d-370f998cc8b3"). InnerVolumeSpecName "kube-api-access-cdldn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 07 20:05:21 addons-946218 kubelet[2322]: I1207 20:05:21.356620    2322 scope.go:117] "RemoveContainer" containerID="4663908f1e2cbdcacd9a821db19e79e46403f7ef10b3b1fbedba515e86ff1163"
	Dec 07 20:05:21 addons-946218 kubelet[2322]: I1207 20:05:21.381937    2322 scope.go:117] "RemoveContainer" containerID="4663908f1e2cbdcacd9a821db19e79e46403f7ef10b3b1fbedba515e86ff1163"
	Dec 07 20:05:21 addons-946218 kubelet[2322]: E1207 20:05:21.383138    2322 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 4663908f1e2cbdcacd9a821db19e79e46403f7ef10b3b1fbedba515e86ff1163" containerID="4663908f1e2cbdcacd9a821db19e79e46403f7ef10b3b1fbedba515e86ff1163"
	Dec 07 20:05:21 addons-946218 kubelet[2322]: I1207 20:05:21.383194    2322 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"4663908f1e2cbdcacd9a821db19e79e46403f7ef10b3b1fbedba515e86ff1163"} err="failed to get container status \"4663908f1e2cbdcacd9a821db19e79e46403f7ef10b3b1fbedba515e86ff1163\": rpc error: code = Unknown desc = Error response from daemon: No such container: 4663908f1e2cbdcacd9a821db19e79e46403f7ef10b3b1fbedba515e86ff1163"
	Dec 07 20:05:21 addons-946218 kubelet[2322]: I1207 20:05:21.384675    2322 scope.go:117] "RemoveContainer" containerID="b919099d7cbbbdbb01d3d2c1f0094b5ebc6ce1dab5c9d7bfe95d33f88692f181"
	Dec 07 20:05:21 addons-946218 kubelet[2322]: I1207 20:05:21.385149    2322 scope.go:117] "RemoveContainer" containerID="d052eb56a118a0a2584aea37272b9c1e564489bba4cfb085b707c5e14e3ff4d3"
	Dec 07 20:05:21 addons-946218 kubelet[2322]: E1207 20:05:21.397914    2322 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-mpj5l_default(fdc5fefe-1fa6-4bdf-b0b9-e619968c2142)\"" pod="default/hello-world-app-5d77478584-mpj5l" podUID="fdc5fefe-1fa6-4bdf-b0b9-e619968c2142"
	Dec 07 20:05:21 addons-946218 kubelet[2322]: I1207 20:05:21.409236    2322 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cdldn\" (UniqueName: \"kubernetes.io/projected/294b182c-8c00-4bd0-a19d-370f998cc8b3-kube-api-access-cdldn\") on node \"addons-946218\" DevicePath \"\""
	Dec 07 20:05:21 addons-946218 kubelet[2322]: I1207 20:05:21.409284    2322 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/294b182c-8c00-4bd0-a19d-370f998cc8b3-webhook-cert\") on node \"addons-946218\" DevicePath \"\""
	Dec 07 20:05:21 addons-946218 kubelet[2322]: I1207 20:05:21.510201    2322 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5c7c8634-3e0a-46c4-9ff4-6644b1fce6c7-gcp-creds\") pod \"5c7c8634-3e0a-46c4-9ff4-6644b1fce6c7\" (UID: \"5c7c8634-3e0a-46c4-9ff4-6644b1fce6c7\") "
	Dec 07 20:05:21 addons-946218 kubelet[2322]: I1207 20:05:21.510448    2322 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c7c8634-3e0a-46c4-9ff4-6644b1fce6c7-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "5c7c8634-3e0a-46c4-9ff4-6644b1fce6c7" (UID: "5c7c8634-3e0a-46c4-9ff4-6644b1fce6c7"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Dec 07 20:05:21 addons-946218 kubelet[2322]: I1207 20:05:21.510516    2322 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7cbk\" (UniqueName: \"kubernetes.io/projected/5c7c8634-3e0a-46c4-9ff4-6644b1fce6c7-kube-api-access-t7cbk\") pod \"5c7c8634-3e0a-46c4-9ff4-6644b1fce6c7\" (UID: \"5c7c8634-3e0a-46c4-9ff4-6644b1fce6c7\") "
	Dec 07 20:05:21 addons-946218 kubelet[2322]: I1207 20:05:21.510628    2322 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/5c7c8634-3e0a-46c4-9ff4-6644b1fce6c7-script\") pod \"5c7c8634-3e0a-46c4-9ff4-6644b1fce6c7\" (UID: \"5c7c8634-3e0a-46c4-9ff4-6644b1fce6c7\") "
	Dec 07 20:05:21 addons-946218 kubelet[2322]: I1207 20:05:21.510673    2322 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/5c7c8634-3e0a-46c4-9ff4-6644b1fce6c7-data\") pod \"5c7c8634-3e0a-46c4-9ff4-6644b1fce6c7\" (UID: \"5c7c8634-3e0a-46c4-9ff4-6644b1fce6c7\") "
	Dec 07 20:05:21 addons-946218 kubelet[2322]: I1207 20:05:21.510780    2322 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5c7c8634-3e0a-46c4-9ff4-6644b1fce6c7-gcp-creds\") on node \"addons-946218\" DevicePath \"\""
	Dec 07 20:05:21 addons-946218 kubelet[2322]: I1207 20:05:21.510853    2322 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c7c8634-3e0a-46c4-9ff4-6644b1fce6c7-data" (OuterVolumeSpecName: "data") pod "5c7c8634-3e0a-46c4-9ff4-6644b1fce6c7" (UID: "5c7c8634-3e0a-46c4-9ff4-6644b1fce6c7"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Dec 07 20:05:21 addons-946218 kubelet[2322]: I1207 20:05:21.512463    2322 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c7c8634-3e0a-46c4-9ff4-6644b1fce6c7-script" (OuterVolumeSpecName: "script") pod "5c7c8634-3e0a-46c4-9ff4-6644b1fce6c7" (UID: "5c7c8634-3e0a-46c4-9ff4-6644b1fce6c7"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Dec 07 20:05:21 addons-946218 kubelet[2322]: I1207 20:05:21.516891    2322 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c7c8634-3e0a-46c4-9ff4-6644b1fce6c7-kube-api-access-t7cbk" (OuterVolumeSpecName: "kube-api-access-t7cbk") pod "5c7c8634-3e0a-46c4-9ff4-6644b1fce6c7" (UID: "5c7c8634-3e0a-46c4-9ff4-6644b1fce6c7"). InnerVolumeSpecName "kube-api-access-t7cbk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 07 20:05:21 addons-946218 kubelet[2322]: I1207 20:05:21.611055    2322 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-t7cbk\" (UniqueName: \"kubernetes.io/projected/5c7c8634-3e0a-46c4-9ff4-6644b1fce6c7-kube-api-access-t7cbk\") on node \"addons-946218\" DevicePath \"\""
	Dec 07 20:05:21 addons-946218 kubelet[2322]: I1207 20:05:21.611089    2322 reconciler_common.go:300] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/5c7c8634-3e0a-46c4-9ff4-6644b1fce6c7-script\") on node \"addons-946218\" DevicePath \"\""
	Dec 07 20:05:21 addons-946218 kubelet[2322]: I1207 20:05:21.611101    2322 reconciler_common.go:300] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/5c7c8634-3e0a-46c4-9ff4-6644b1fce6c7-data\") on node \"addons-946218\" DevicePath \"\""
	Dec 07 20:05:22 addons-946218 kubelet[2322]: I1207 20:05:22.411081    2322 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5533ccc0263224c1793492c2e40ac9b0eb3a4dc4ff4fe86ec0b5ba5cba9a4799"
	Dec 07 20:05:22 addons-946218 kubelet[2322]: I1207 20:05:22.661677    2322 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="294b182c-8c00-4bd0-a19d-370f998cc8b3" path="/var/lib/kubelet/pods/294b182c-8c00-4bd0-a19d-370f998cc8b3/volumes"
	
	* 
	* ==> storage-provisioner [cc59fd8925b3] <==
	* I1207 20:03:04.897809       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1207 20:03:04.928508       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 20:03:04.929851       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1207 20:03:04.945078       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 20:03:04.945262       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-946218_e3713f22-599b-4515-b325-7f5e4335cb05!
	I1207 20:03:04.945368       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"13c59dd0-3cc9-4f7c-9875-695381f72f55", APIVersion:"v1", ResourceVersion:"642", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-946218_e3713f22-599b-4515-b325-7f5e4335cb05 became leader
	I1207 20:03:05.046638       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-946218_e3713f22-599b-4515-b325-7f5e4335cb05!
	

-- /stdout --
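The storage-provisioner log above shows it winning leader election on the kube-system/k8s.io-minikube-hostpath Endpoints object before starting its controller. A hedged sketch for inspecting that lock by hand, using only names that appear in the log (this command is illustrative and was not part of the test run):

	# Dump the Endpoints object used as the provisioner's leader-election lock;
	# the current holder is recorded in the object's annotations.
	kubectl --context addons-946218 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml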
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-946218 -n addons-946218
helpers_test.go:261: (dbg) Run:  kubectl --context addons-946218 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (39.46s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (65.65s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-362953 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E1207 20:14:16.130374    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-362953 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.554859796s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-362953 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-362953 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [66efc0c3-afa3-4d9e-a978-37a4f5e18ef6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [66efc0c3-afa3-4d9e-a978-37a4f5e18ef6] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 16.020557288s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-362953 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-362953 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-362953 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E1207 20:14:43.816616    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.020074158s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-362953 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-362953 addons disable ingress-dns --alsologtostderr -v=1: (10.861237459s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-362953 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-362953 addons disable ingress --alsologtostderr -v=1: (7.557995944s)
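The step that fails in this test (and in TestAddons/parallel/Ingress above) is the DNS probe: after the ingress-dns example is applied, nslookup against the node IP 192.168.49.2 is expected to resolve hello-john.test, but it times out. A minimal manual reproduction sketch built only from commands already shown in this log; it assumes the cluster is still running and re-enables the addon that the cleanup steps above disabled:

	# Re-enable the addon removed during cleanup, re-apply the example, and repeat the probe.
	out/minikube-linux-arm64 -p ingress-addon-legacy-362953 addons enable ingress-dns --alsologtostderr -v=5
	kubectl --context ingress-addon-legacy-362953 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
	nslookup hello-john.test "$(out/minikube-linux-arm64 -p ingress-addon-legacy-362953 ip)"   # this is the step that timed out in this run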
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-362953
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-362953:

-- stdout --
	[
	    {
	        "Id": "067a32f94ddd2831c4bb0dac2a22b29bd1c15a0dd0fb6dc7066c5163ee7d3dd4",
	        "Created": "2023-12-07T20:12:20.722321816Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 55206,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-12-07T20:12:21.053306188Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:769b0b780370d646693e9d8a4170c38d193d2f33565406ee9066915c40e406d4",
	        "ResolvConfPath": "/var/lib/docker/containers/067a32f94ddd2831c4bb0dac2a22b29bd1c15a0dd0fb6dc7066c5163ee7d3dd4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/067a32f94ddd2831c4bb0dac2a22b29bd1c15a0dd0fb6dc7066c5163ee7d3dd4/hostname",
	        "HostsPath": "/var/lib/docker/containers/067a32f94ddd2831c4bb0dac2a22b29bd1c15a0dd0fb6dc7066c5163ee7d3dd4/hosts",
	        "LogPath": "/var/lib/docker/containers/067a32f94ddd2831c4bb0dac2a22b29bd1c15a0dd0fb6dc7066c5163ee7d3dd4/067a32f94ddd2831c4bb0dac2a22b29bd1c15a0dd0fb6dc7066c5163ee7d3dd4-json.log",
	        "Name": "/ingress-addon-legacy-362953",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-362953:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-362953",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bc10904ba86a72c7584019c5c670f5974ec5393c0a2f9b6feff7a96f7b46fdde-init/diff:/var/lib/docker/overlay2/baac1057f1861dfdebb7423d9d7ad7a05f930e41cec62cfa33740325cb982d86/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bc10904ba86a72c7584019c5c670f5974ec5393c0a2f9b6feff7a96f7b46fdde/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bc10904ba86a72c7584019c5c670f5974ec5393c0a2f9b6feff7a96f7b46fdde/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bc10904ba86a72c7584019c5c670f5974ec5393c0a2f9b6feff7a96f7b46fdde/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-362953",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-362953/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-362953",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-362953",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-362953",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "04ebd718f3fb17adfca39940b41727bf04b338c7b95f6238a972c3ab1952925f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/04ebd718f3fb",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-362953": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "067a32f94ddd",
	                        "ingress-addon-legacy-362953"
	                    ],
	                    "NetworkID": "902b2f4629102800f1e43400a46cdebec7bfecd911561dad27f6ca33bdbe2e5e",
	                    "EndpointID": "237a50643227777686f32009750c926a1c16b31ece1d75131d7ac467ff6169e1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
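The inspect dump above is where the post-mortem reads the container's static IP and published ports. A short sketch of extracting just those fields with --format templates; the IP template is an assumption for illustration, while the 22/tcp template matches the one the provisioning log uses further down:

	# Static IP on the ingress-addon-legacy-362953 network (expected: 192.168.49.2).
	docker container inspect ingress-addon-legacy-362953 --format '{{ (index .NetworkSettings.Networks "ingress-addon-legacy-362953").IPAddress }}'
	# Host port published for the container's SSH port 22/tcp (32792 in this run).
	docker container inspect ingress-addon-legacy-362953 --format '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}'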
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-362953 -n ingress-addon-legacy-362953
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-362953 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-362953 logs -n 25: (1.043260063s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-718233                     | functional-718233           | jenkins | v1.32.0 | 07 Dec 23 20:11 UTC |                     |
	|                | --kill=true                              |                             |         |         |                     |                     |
	| update-context | functional-718233                        | functional-718233           | jenkins | v1.32.0 | 07 Dec 23 20:11 UTC | 07 Dec 23 20:11 UTC |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-718233                        | functional-718233           | jenkins | v1.32.0 | 07 Dec 23 20:11 UTC | 07 Dec 23 20:11 UTC |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-718233                        | functional-718233           | jenkins | v1.32.0 | 07 Dec 23 20:11 UTC | 07 Dec 23 20:11 UTC |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| image          | functional-718233                        | functional-718233           | jenkins | v1.32.0 | 07 Dec 23 20:11 UTC | 07 Dec 23 20:11 UTC |
	|                | image ls --format short                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-718233                        | functional-718233           | jenkins | v1.32.0 | 07 Dec 23 20:11 UTC | 07 Dec 23 20:11 UTC |
	|                | image ls --format yaml                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| ssh            | functional-718233 ssh pgrep              | functional-718233           | jenkins | v1.32.0 | 07 Dec 23 20:11 UTC |                     |
	|                | buildkitd                                |                             |         |         |                     |                     |
	| image          | functional-718233 image build -t         | functional-718233           | jenkins | v1.32.0 | 07 Dec 23 20:11 UTC | 07 Dec 23 20:11 UTC |
	|                | localhost/my-image:functional-718233     |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                             |         |         |                     |                     |
	| image          | functional-718233 image ls               | functional-718233           | jenkins | v1.32.0 | 07 Dec 23 20:11 UTC | 07 Dec 23 20:11 UTC |
	| image          | functional-718233                        | functional-718233           | jenkins | v1.32.0 | 07 Dec 23 20:11 UTC | 07 Dec 23 20:11 UTC |
	|                | image ls --format json                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-718233                        | functional-718233           | jenkins | v1.32.0 | 07 Dec 23 20:11 UTC | 07 Dec 23 20:11 UTC |
	|                | image ls --format table                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| delete         | -p functional-718233                     | functional-718233           | jenkins | v1.32.0 | 07 Dec 23 20:11 UTC | 07 Dec 23 20:11 UTC |
	| start          | -p image-570451                          | image-570451                | jenkins | v1.32.0 | 07 Dec 23 20:11 UTC | 07 Dec 23 20:11 UTC |
	|                | --driver=docker                          |                             |         |         |                     |                     |
	|                | --container-runtime=docker               |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-570451                | jenkins | v1.32.0 | 07 Dec 23 20:11 UTC | 07 Dec 23 20:11 UTC |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | -p image-570451                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-570451                | jenkins | v1.32.0 | 07 Dec 23 20:11 UTC | 07 Dec 23 20:11 UTC |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                             |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                             |         |         |                     |                     |
	|                | image-570451                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-570451                | jenkins | v1.32.0 | 07 Dec 23 20:11 UTC | 07 Dec 23 20:11 UTC |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | --build-opt=no-cache -p                  |                             |         |         |                     |                     |
	|                | image-570451                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-570451                | jenkins | v1.32.0 | 07 Dec 23 20:11 UTC | 07 Dec 23 20:11 UTC |
	|                | -f inner/Dockerfile                      |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-f            |                             |         |         |                     |                     |
	|                | -p image-570451                          |                             |         |         |                     |                     |
	| delete         | -p image-570451                          | image-570451                | jenkins | v1.32.0 | 07 Dec 23 20:11 UTC | 07 Dec 23 20:11 UTC |
	| start          | -p ingress-addon-legacy-362953           | ingress-addon-legacy-362953 | jenkins | v1.32.0 | 07 Dec 23 20:11 UTC | 07 Dec 23 20:13 UTC |
	|                | --kubernetes-version=v1.18.20            |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                     |                             |         |         |                     |                     |
	|                | --container-runtime=docker               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-362953              | ingress-addon-legacy-362953 | jenkins | v1.32.0 | 07 Dec 23 20:13 UTC | 07 Dec 23 20:14 UTC |
	|                | addons enable ingress                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-362953              | ingress-addon-legacy-362953 | jenkins | v1.32.0 | 07 Dec 23 20:14 UTC | 07 Dec 23 20:14 UTC |
	|                | addons enable ingress-dns                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-362953              | ingress-addon-legacy-362953 | jenkins | v1.32.0 | 07 Dec 23 20:14 UTC | 07 Dec 23 20:14 UTC |
	|                | ssh curl -s http://127.0.0.1/            |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'             |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-362953 ip           | ingress-addon-legacy-362953 | jenkins | v1.32.0 | 07 Dec 23 20:14 UTC | 07 Dec 23 20:14 UTC |
	| addons         | ingress-addon-legacy-362953              | ingress-addon-legacy-362953 | jenkins | v1.32.0 | 07 Dec 23 20:14 UTC | 07 Dec 23 20:15 UTC |
	|                | addons disable ingress-dns               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-362953              | ingress-addon-legacy-362953 | jenkins | v1.32.0 | 07 Dec 23 20:15 UTC | 07 Dec 23 20:15 UTC |
	|                | addons disable ingress                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 20:11:58
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 20:11:58.782494   54749 out.go:296] Setting OutFile to fd 1 ...
	I1207 20:11:58.782715   54749 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:11:58.782740   54749 out.go:309] Setting ErrFile to fd 2...
	I1207 20:11:58.782759   54749 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:11:58.783047   54749 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-2292/.minikube/bin
	I1207 20:11:58.783509   54749 out.go:303] Setting JSON to false
	I1207 20:11:58.784468   54749 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":3262,"bootTime":1701976657,"procs":290,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1207 20:11:58.784573   54749 start.go:138] virtualization:  
	I1207 20:11:58.787045   54749 out.go:177] * [ingress-addon-legacy-362953] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1207 20:11:58.789532   54749 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 20:11:58.789624   54749 notify.go:220] Checking for updates...
	I1207 20:11:58.791124   54749 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 20:11:58.793417   54749 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17719-2292/kubeconfig
	I1207 20:11:58.795133   54749 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-2292/.minikube
	I1207 20:11:58.796931   54749 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1207 20:11:58.798640   54749 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 20:11:58.800662   54749 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 20:11:58.825141   54749 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1207 20:11:58.825246   54749 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 20:11:58.907949   54749 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-07 20:11:58.898295828 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1207 20:11:58.908049   54749 docker.go:295] overlay module found
	I1207 20:11:58.909999   54749 out.go:177] * Using the docker driver based on user configuration
	I1207 20:11:58.911503   54749 start.go:298] selected driver: docker
	I1207 20:11:58.911521   54749 start.go:902] validating driver "docker" against <nil>
	I1207 20:11:58.911534   54749 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 20:11:58.912158   54749 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 20:11:58.989350   54749 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-07 20:11:58.980097074 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1207 20:11:58.989511   54749 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 20:11:58.989739   54749 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 20:11:58.991765   54749 out.go:177] * Using Docker driver with root privileges
	I1207 20:11:58.993766   54749 cni.go:84] Creating CNI manager for ""
	I1207 20:11:58.993817   54749 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1207 20:11:58.993836   54749 start_flags.go:323] config:
	{Name:ingress-addon-legacy-362953 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-362953 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:11:58.995673   54749 out.go:177] * Starting control plane node ingress-addon-legacy-362953 in cluster ingress-addon-legacy-362953
	I1207 20:11:58.997147   54749 cache.go:121] Beginning downloading kic base image for docker with docker
	I1207 20:11:58.998825   54749 out.go:177] * Pulling base image ...
	I1207 20:11:59.000505   54749 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1207 20:11:59.000769   54749 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c in local docker daemon
	I1207 20:11:59.019387   54749 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c in local docker daemon, skipping pull
	I1207 20:11:59.019416   54749 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c exists in daemon, skipping load
	I1207 20:11:59.063405   54749 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I1207 20:11:59.063442   54749 cache.go:56] Caching tarball of preloaded images
	I1207 20:11:59.063612   54749 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1207 20:11:59.065715   54749 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1207 20:11:59.067719   54749 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1207 20:11:59.184420   54749 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /home/jenkins/minikube-integration/17719-2292/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I1207 20:12:13.333847   54749 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1207 20:12:13.333957   54749 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17719-2292/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1207 20:12:14.442365   54749 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I1207 20:12:14.442734   54749 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/config.json ...
	I1207 20:12:14.442770   54749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/config.json: {Name:mkfa488b05be9562276370dbd3f78e4e4ed69389 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:12:14.442961   54749 cache.go:194] Successfully downloaded all kic artifacts
	I1207 20:12:14.443021   54749 start.go:365] acquiring machines lock for ingress-addon-legacy-362953: {Name:mkf7f965db84fd0a48a83b1c743f320f013eb710 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 20:12:14.443079   54749 start.go:369] acquired machines lock for "ingress-addon-legacy-362953" in 44.84µs
	I1207 20:12:14.443104   54749 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-362953 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-362953 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 20:12:14.443177   54749 start.go:125] createHost starting for "" (driver="docker")
	I1207 20:12:14.445346   54749 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1207 20:12:14.445566   54749 start.go:159] libmachine.API.Create for "ingress-addon-legacy-362953" (driver="docker")
	I1207 20:12:14.445607   54749 client.go:168] LocalClient.Create starting
	I1207 20:12:14.445675   54749 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17719-2292/.minikube/certs/ca.pem
	I1207 20:12:14.445709   54749 main.go:141] libmachine: Decoding PEM data...
	I1207 20:12:14.445729   54749 main.go:141] libmachine: Parsing certificate...
	I1207 20:12:14.445801   54749 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17719-2292/.minikube/certs/cert.pem
	I1207 20:12:14.445825   54749 main.go:141] libmachine: Decoding PEM data...
	I1207 20:12:14.445843   54749 main.go:141] libmachine: Parsing certificate...
	I1207 20:12:14.446179   54749 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-362953 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1207 20:12:14.463548   54749 cli_runner.go:211] docker network inspect ingress-addon-legacy-362953 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1207 20:12:14.463631   54749 network_create.go:281] running [docker network inspect ingress-addon-legacy-362953] to gather additional debugging logs...
	I1207 20:12:14.463650   54749 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-362953
	W1207 20:12:14.482865   54749 cli_runner.go:211] docker network inspect ingress-addon-legacy-362953 returned with exit code 1
	I1207 20:12:14.482899   54749 network_create.go:284] error running [docker network inspect ingress-addon-legacy-362953]: docker network inspect ingress-addon-legacy-362953: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-362953 not found
	I1207 20:12:14.482913   54749 network_create.go:286] output of [docker network inspect ingress-addon-legacy-362953]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-362953 not found
	
	** /stderr **
	I1207 20:12:14.483025   54749 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 20:12:14.500782   54749 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400052c2c0}
	I1207 20:12:14.500828   54749 network_create.go:124] attempt to create docker network ingress-addon-legacy-362953 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1207 20:12:14.500889   54749 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-362953 ingress-addon-legacy-362953
	I1207 20:12:14.572514   54749 network_create.go:108] docker network ingress-addon-legacy-362953 192.168.49.0/24 created
	I1207 20:12:14.572544   54749 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-362953" container
	I1207 20:12:14.572611   54749 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1207 20:12:14.589849   54749 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-362953 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-362953 --label created_by.minikube.sigs.k8s.io=true
	I1207 20:12:14.608914   54749 oci.go:103] Successfully created a docker volume ingress-addon-legacy-362953
	I1207 20:12:14.608998   54749 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-362953-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-362953 --entrypoint /usr/bin/test -v ingress-addon-legacy-362953:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c -d /var/lib
	I1207 20:12:15.981448   54749 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-362953-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-362953 --entrypoint /usr/bin/test -v ingress-addon-legacy-362953:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c -d /var/lib: (1.372412944s)
	I1207 20:12:15.981482   54749 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-362953
	I1207 20:12:15.981499   54749 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1207 20:12:15.981518   54749 kic.go:194] Starting extracting preloaded images to volume ...
	I1207 20:12:15.981605   54749 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17719-2292/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-362953:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c -I lz4 -xf /preloaded.tar -C /extractDir
	I1207 20:12:20.631638   54749 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17719-2292/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-362953:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c -I lz4 -xf /preloaded.tar -C /extractDir: (4.649993216s)
	I1207 20:12:20.631669   54749 kic.go:203] duration metric: took 4.650149 seconds to extract preloaded images to volume
	W1207 20:12:20.631799   54749 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1207 20:12:20.631913   54749 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1207 20:12:20.706524   54749 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-362953 --name ingress-addon-legacy-362953 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-362953 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-362953 --network ingress-addon-legacy-362953 --ip 192.168.49.2 --volume ingress-addon-legacy-362953:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c
	I1207 20:12:21.063950   54749 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-362953 --format={{.State.Running}}
	I1207 20:12:21.108256   54749 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-362953 --format={{.State.Status}}
	I1207 20:12:21.136630   54749 cli_runner.go:164] Run: docker exec ingress-addon-legacy-362953 stat /var/lib/dpkg/alternatives/iptables
	I1207 20:12:21.222716   54749 oci.go:144] the created container "ingress-addon-legacy-362953" has a running status.
	I1207 20:12:21.222746   54749 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17719-2292/.minikube/machines/ingress-addon-legacy-362953/id_rsa...
	I1207 20:12:21.899932   54749 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-2292/.minikube/machines/ingress-addon-legacy-362953/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1207 20:12:21.899983   54749 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17719-2292/.minikube/machines/ingress-addon-legacy-362953/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1207 20:12:21.925604   54749 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-362953 --format={{.State.Status}}
	I1207 20:12:21.967486   54749 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1207 20:12:21.967511   54749 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-362953 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1207 20:12:22.076763   54749 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-362953 --format={{.State.Status}}
	I1207 20:12:22.106496   54749 machine.go:88] provisioning docker machine ...
	I1207 20:12:22.106533   54749 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-362953"
	I1207 20:12:22.106606   54749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-362953
	I1207 20:12:22.140919   54749 main.go:141] libmachine: Using SSH client type: native
	I1207 20:12:22.141448   54749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I1207 20:12:22.141465   54749 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-362953 && echo "ingress-addon-legacy-362953" | sudo tee /etc/hostname
	I1207 20:12:22.314021   54749 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-362953
	
	I1207 20:12:22.314105   54749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-362953
	I1207 20:12:22.343633   54749 main.go:141] libmachine: Using SSH client type: native
	I1207 20:12:22.344046   54749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I1207 20:12:22.344070   54749 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-362953' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-362953/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-362953' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 20:12:22.478163   54749 main.go:141] libmachine: SSH cmd err, output: <nil>: 
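The hostname provisioning above is idempotent: the first SSH command sets the kernel hostname and /etc/hostname, and the second rewrites an existing 127.0.1.1 entry in /etc/hosts in place, appending one only if none exists, so reprovisioning the same machine never duplicates entries. A minimal standalone sketch of the same logic (NAME is a placeholder for the profile name):

    NAME=ingress-addon-legacy-362953
    sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
    if ! grep -q "[[:space:]]$NAME\$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
        # rewrite the existing 127.0.1.1 entry in place
        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" /etc/hosts
      else
        # no 127.0.1.1 entry yet; append one
        echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
      fi
    fi

Later host entries (host.minikube.internal, control-plane.minikube.internal) are instead updated with a grep -v / echo / cp sequence, since cp writes through the bind-mounted /etc/hosts of a container, where a rename-based replacement would fail.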
	I1207 20:12:22.478189   54749 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17719-2292/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-2292/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-2292/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-2292/.minikube}
	I1207 20:12:22.478207   54749 ubuntu.go:177] setting up certificates
	I1207 20:12:22.478215   54749 provision.go:83] configureAuth start
	I1207 20:12:22.478273   54749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-362953
	I1207 20:12:22.498760   54749 provision.go:138] copyHostCerts
	I1207 20:12:22.498801   54749 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-2292/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17719-2292/.minikube/ca.pem
	I1207 20:12:22.498833   54749 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-2292/.minikube/ca.pem, removing ...
	I1207 20:12:22.498839   54749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-2292/.minikube/ca.pem
	I1207 20:12:22.498918   54749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-2292/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-2292/.minikube/ca.pem (1078 bytes)
	I1207 20:12:22.499000   54749 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-2292/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17719-2292/.minikube/cert.pem
	I1207 20:12:22.499017   54749 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-2292/.minikube/cert.pem, removing ...
	I1207 20:12:22.499021   54749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-2292/.minikube/cert.pem
	I1207 20:12:22.499048   54749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-2292/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-2292/.minikube/cert.pem (1123 bytes)
	I1207 20:12:22.499097   54749 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-2292/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17719-2292/.minikube/key.pem
	I1207 20:12:22.499113   54749 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-2292/.minikube/key.pem, removing ...
	I1207 20:12:22.499117   54749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-2292/.minikube/key.pem
	I1207 20:12:22.499143   54749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-2292/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-2292/.minikube/key.pem (1679 bytes)
	I1207 20:12:22.499196   54749 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-2292/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-2292/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-2292/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-362953 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-362953]
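The server certificate above is generated in Go by minikube itself, signed by the profile CA and carrying every address a client might use as a SAN. A rough equivalent with the openssl CLI, for illustration only (requires OpenSSL 1.1.1+ for -addext; file names are placeholders, and unlike the real certificate this one is self-signed rather than CA-signed):

    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout server-key.pem -out server.pem \
      -subj "/O=jenkins.ingress-addon-legacy-362953" \
      -addext "subjectAltName=IP:192.168.49.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:ingress-addon-legacy-362953"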
	I1207 20:12:22.831764   54749 provision.go:172] copyRemoteCerts
	I1207 20:12:22.831890   54749 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 20:12:22.831941   54749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-362953
	I1207 20:12:22.853114   54749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/ingress-addon-legacy-362953/id_rsa Username:docker}
	I1207 20:12:22.947242   54749 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-2292/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1207 20:12:22.947304   54749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1207 20:12:22.976038   54749 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-2292/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1207 20:12:22.976098   54749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1207 20:12:23.007177   54749 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-2292/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1207 20:12:23.007268   54749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 20:12:23.035949   54749 provision.go:86] duration metric: configureAuth took 557.720545ms
	I1207 20:12:23.035974   54749 ubuntu.go:193] setting minikube options for container-runtime
	I1207 20:12:23.036168   54749 config.go:182] Loaded profile config "ingress-addon-legacy-362953": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1207 20:12:23.036238   54749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-362953
	I1207 20:12:23.054252   54749 main.go:141] libmachine: Using SSH client type: native
	I1207 20:12:23.054646   54749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I1207 20:12:23.054661   54749 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1207 20:12:23.182436   54749 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1207 20:12:23.182457   54749 ubuntu.go:71] root file system type: overlay
	I1207 20:12:23.182582   54749 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1207 20:12:23.182657   54749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-362953
	I1207 20:12:23.200126   54749 main.go:141] libmachine: Using SSH client type: native
	I1207 20:12:23.200539   54749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I1207 20:12:23.200623   54749 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1207 20:12:23.340218   54749 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1207 20:12:23.340374   54749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-362953
	I1207 20:12:23.359453   54749 main.go:141] libmachine: Using SSH client type: native
	I1207 20:12:23.359881   54749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I1207 20:12:23.359899   54749 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1207 20:12:24.175711   54749 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:20.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-12-07 20:12:23.335451307 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1207 20:12:24.175760   54749 machine.go:91] provisioned docker machine in 2.069232207s
	I1207 20:12:24.175791   54749 client.go:171] LocalClient.Create took 9.73017523s
	I1207 20:12:24.175804   54749 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-362953" took 9.730237548s
	I1207 20:12:24.175812   54749 start.go:300] post-start starting for "ingress-addon-legacy-362953" (driver="docker")
	I1207 20:12:24.175822   54749 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 20:12:24.175886   54749 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 20:12:24.175934   54749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-362953
	I1207 20:12:24.195267   54749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/ingress-addon-legacy-362953/id_rsa Username:docker}
	I1207 20:12:24.287689   54749 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 20:12:24.291821   54749 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 20:12:24.291901   54749 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1207 20:12:24.291917   54749 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1207 20:12:24.291925   54749 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1207 20:12:24.291935   54749 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-2292/.minikube/addons for local assets ...
	I1207 20:12:24.291997   54749 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-2292/.minikube/files for local assets ...
	I1207 20:12:24.292089   54749 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-2292/.minikube/files/etc/ssl/certs/76002.pem -> 76002.pem in /etc/ssl/certs
	I1207 20:12:24.292101   54749 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-2292/.minikube/files/etc/ssl/certs/76002.pem -> /etc/ssl/certs/76002.pem
	I1207 20:12:24.292205   54749 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 20:12:24.302606   54749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/files/etc/ssl/certs/76002.pem --> /etc/ssl/certs/76002.pem (1708 bytes)
	I1207 20:12:24.329869   54749 start.go:303] post-start completed in 154.042874ms
	I1207 20:12:24.330228   54749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-362953
	I1207 20:12:24.348152   54749 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/config.json ...
	I1207 20:12:24.348418   54749 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 20:12:24.348471   54749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-362953
	I1207 20:12:24.366267   54749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/ingress-addon-legacy-362953/id_rsa Username:docker}
	I1207 20:12:24.454655   54749 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 20:12:24.460880   54749 start.go:128] duration metric: createHost completed in 10.017688593s
	I1207 20:12:24.460943   54749 start.go:83] releasing machines lock for "ingress-addon-legacy-362953", held for 10.017849986s
	I1207 20:12:24.461043   54749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-362953
	I1207 20:12:24.478665   54749 ssh_runner.go:195] Run: cat /version.json
	I1207 20:12:24.478727   54749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-362953
	I1207 20:12:24.478901   54749 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 20:12:24.478955   54749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-362953
	I1207 20:12:24.498694   54749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/ingress-addon-legacy-362953/id_rsa Username:docker}
	I1207 20:12:24.507299   54749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/ingress-addon-legacy-362953/id_rsa Username:docker}
	I1207 20:12:24.592973   54749 ssh_runner.go:195] Run: systemctl --version
	I1207 20:12:24.730389   54749 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1207 20:12:24.735797   54749 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1207 20:12:24.765434   54749 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1207 20:12:24.765526   54749 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1207 20:12:24.785330   54749 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1207 20:12:24.805025   54749 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
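The three find/sed passes above normalize whatever CNI configs ship in the base image: the loopback config gains a "name" field and cniVersion 1.0.0, bridge configs lose their IPv6 dst/subnet entries and are forced onto the 10.244.0.0/16 pod subnet, and podman bridge configs get the same subnet plus a 10.244.0.1 gateway. The effect of the subnet rewrite on a single line, demonstrated with the same sed expression (input value illustrative):

    $ echo '   "subnet": "10.88.0.0/16",' \
        | sed -r 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g'
       "subnet": "10.244.0.0/16",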
	I1207 20:12:24.805054   54749 start.go:475] detecting cgroup driver to use...
	I1207 20:12:24.805086   54749 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1207 20:12:24.805198   54749 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 20:12:24.824809   54749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1207 20:12:24.837673   54749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1207 20:12:24.849433   54749 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1207 20:12:24.849502   54749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1207 20:12:24.861138   54749 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1207 20:12:24.872130   54749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1207 20:12:24.883674   54749 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1207 20:12:24.895184   54749 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 20:12:24.906177   54749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1207 20:12:24.917781   54749 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 20:12:24.927588   54749 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 20:12:24.937683   54749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 20:12:25.034788   54749 ssh_runner.go:195] Run: sudo systemctl restart containerd
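The sed series above rewrites /etc/containerd/config.toml in place so containerd agrees with the detected cgroupfs driver: the sandbox image is pinned to pause:3.2, SystemdCgroup is forced to false, legacy io.containerd.runtime.v1.linux and runc.v1 runtime names are swapped for io.containerd.runc.v2, and the CNI conf_dir is pointed at /etc/cni/net.d. One way to confirm the result from inside the node (output abbreviated and illustrative):

    $ sudo grep -E 'sandbox_image|SystemdCgroup|runc\.v2|conf_dir' /etc/containerd/config.toml
        sandbox_image = "registry.k8s.io/pause:3.2"
          runtime_type = "io.containerd.runc.v2"
            SystemdCgroup = false
        conf_dir = "/etc/cni/net.d"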
	I1207 20:12:25.161318   54749 start.go:475] detecting cgroup driver to use...
	I1207 20:12:25.161376   54749 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1207 20:12:25.161447   54749 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1207 20:12:25.182609   54749 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1207 20:12:25.182753   54749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1207 20:12:25.197604   54749 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 20:12:25.220556   54749 ssh_runner.go:195] Run: which cri-dockerd
	I1207 20:12:25.225853   54749 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1207 20:12:25.237340   54749 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1207 20:12:25.263716   54749 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1207 20:12:25.378136   54749 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1207 20:12:25.489071   54749 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1207 20:12:25.489236   54749 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1207 20:12:25.514192   54749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 20:12:25.628005   54749 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1207 20:12:25.916275   54749 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1207 20:12:25.943922   54749 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1207 20:12:25.975180   54749 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	I1207 20:12:25.975305   54749 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-362953 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 20:12:25.992700   54749 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1207 20:12:25.997482   54749 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 20:12:26.015827   54749 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1207 20:12:26.015900   54749 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1207 20:12:26.038339   54749 docker.go:671] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1207 20:12:26.038364   54749 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1207 20:12:26.038424   54749 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1207 20:12:26.049441   54749 ssh_runner.go:195] Run: which lz4
	I1207 20:12:26.053998   54749 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-2292/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1207 20:12:26.054098   54749 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1207 20:12:26.058571   54749 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1207 20:12:26.058609   54749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I1207 20:12:28.012042   54749 docker.go:635] Took 1.957977 seconds to copy over tarball
	I1207 20:12:28.012130   54749 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1207 20:12:30.463065   54749 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.450904533s)
	I1207 20:12:30.463090   54749 ssh_runner.go:146] rm: /preloaded.tar.lz4
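The ~460 MB file copied above is a preload tarball: an lz4-compressed tar of a pre-populated /var/lib/docker for exactly this Kubernetes version, runtime, storage driver, and architecture (k8s-v18 / v1.18.20 / docker / overlay2 / arm64). Untarring it over /var seeds the image store wholesale, which is why docker is restarted immediately afterwards before images are listed. A way to peek inside such a tarball on the host (assuming the lz4 CLI is installed; the path below is the default cache location seen in this log):

    lz4 -dc ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 \
      | tar -tv | head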
	I1207 20:12:30.545640   54749 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1207 20:12:30.556666   54749 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I1207 20:12:30.578650   54749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 20:12:30.680482   54749 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1207 20:12:32.153969   54749 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.473451777s)
	I1207 20:12:32.154062   54749 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1207 20:12:32.175615   54749 docker.go:671] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1207 20:12:32.175637   54749 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1207 20:12:32.175646   54749 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1207 20:12:32.178005   54749 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1207 20:12:32.178122   54749 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1207 20:12:32.178153   54749 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1207 20:12:32.178224   54749 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1207 20:12:32.178285   54749 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1207 20:12:32.178290   54749 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1207 20:12:32.178345   54749 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1207 20:12:32.178390   54749 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 20:12:32.179873   54749 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1207 20:12:32.179893   54749 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1207 20:12:32.179946   54749 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1207 20:12:32.179982   54749 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1207 20:12:32.180012   54749 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1207 20:12:32.180171   54749 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1207 20:12:32.180300   54749 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 20:12:32.180311   54749 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1207 20:12:32.566498   54749 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W1207 20:12:32.576208   54749 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	W1207 20:12:32.576542   54749 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1207 20:12:32.577008   54749 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1207 20:12:32.577068   54749 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W1207 20:12:32.583930   54749 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1207 20:12:32.584122   54749 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	W1207 20:12:32.584472   54749 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1207 20:12:32.584585   54749 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W1207 20:12:32.590583   54749 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1207 20:12:32.590815   54749 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1207 20:12:32.591109   54749 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1207 20:12:32.591173   54749 docker.go:323] Removing image: registry.k8s.io/pause:3.2
	I1207 20:12:32.591234   54749 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	W1207 20:12:32.595654   54749 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1207 20:12:32.595876   54749 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1207 20:12:32.655851   54749 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1207 20:12:32.655893   54749 docker.go:323] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1207 20:12:32.655963   54749 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I1207 20:12:32.656414   54749 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1207 20:12:32.656475   54749 docker.go:323] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1207 20:12:32.656533   54749 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1207 20:12:32.657739   54749 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1207 20:12:32.657809   54749 docker.go:323] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1207 20:12:32.657872   54749 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1207 20:12:32.657968   54749 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1207 20:12:32.658005   54749 docker.go:323] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1207 20:12:32.658062   54749 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I1207 20:12:32.691145   54749 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-2292/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1207 20:12:32.691222   54749 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1207 20:12:32.691254   54749 docker.go:323] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1207 20:12:32.691330   54749 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1207 20:12:32.693959   54749 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1207 20:12:32.694022   54749 docker.go:323] Removing image: registry.k8s.io/coredns:1.6.7
	I1207 20:12:32.694074   54749 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I1207 20:12:32.726015   54749 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-2292/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1207 20:12:32.726354   54749 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-2292/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1207 20:12:32.733145   54749 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-2292/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1207 20:12:32.746915   54749 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-2292/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1207 20:12:32.758237   54749 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-2292/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1207 20:12:32.763831   54749 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-2292/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	W1207 20:12:32.844423   54749 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1207 20:12:32.844615   54749 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 20:12:32.866617   54749 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1207 20:12:32.866701   54749 docker.go:323] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 20:12:32.866773   54749 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 20:12:32.897986   54749 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17719-2292/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1207 20:12:32.898057   54749 cache_images.go:92] LoadImages completed in 722.399393ms
	W1207 20:12:32.898114   54749 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17719-2292/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2: no such file or directory
	I1207 20:12:32.898170   54749 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1207 20:12:32.958995   54749 cni.go:84] Creating CNI manager for ""
	I1207 20:12:32.959021   54749 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1207 20:12:32.959616   54749 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 20:12:32.959654   54749 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-362953 NodeName:ingress-addon-legacy-362953 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1207 20:12:32.959798   54749 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-362953"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
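The generated config stacks four YAML documents: InitConfiguration (node registration and advertise address), ClusterConfiguration (component extraArgs, certificate SANs, etcd data dir), KubeletConfiguration (cgroupfs driver, eviction disabled), and KubeProxyConfiguration (cluster CIDR, conntrack overrides). It is written to /var/tmp/minikube/kubeadm.yaml.new (see the scp below) and, on a fresh cluster, handed to the version-matched kubeadm binary, along these lines (a sketch; additional flags such as preflight-error overrides are not shown in this section of the log):

    sudo /var/lib/minikube/binaries/v1.18.20/kubeadm init --config /var/tmp/minikube/kubeadm.yaml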
	
	I1207 20:12:32.959866   54749 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-362953 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-362953 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1207 20:12:32.959932   54749 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1207 20:12:32.970490   54749 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 20:12:32.970598   54749 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 20:12:32.981176   54749 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1207 20:12:33.007189   54749 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1207 20:12:33.031225   54749 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I1207 20:12:33.054237   54749 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1207 20:12:33.059072   54749 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 20:12:33.074549   54749 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953 for IP: 192.168.49.2
	I1207 20:12:33.074583   54749 certs.go:190] acquiring lock for shared ca certs: {Name:mkf0aeb9e21068cbc2b0de52461bf1fef9a8e437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:12:33.074726   54749 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-2292/.minikube/ca.key
	I1207 20:12:33.074777   54749 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-2292/.minikube/proxy-client-ca.key
	I1207 20:12:33.074836   54749 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.key
	I1207 20:12:33.074854   54749 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt with IP's: []
	I1207 20:12:33.312947   54749 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt ...
	I1207 20:12:33.312980   54749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt: {Name:mk812835b2839e3a04df7cd2340f12f0fb6940dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:12:33.313193   54749 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.key ...
	I1207 20:12:33.313208   54749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.key: {Name:mk7ee9ef131108e38cea4b20980946147f6db833 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:12:33.313303   54749 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/apiserver.key.dd3b5fb2
	I1207 20:12:33.313320   54749 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1207 20:12:34.450446   54749 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/apiserver.crt.dd3b5fb2 ...
	I1207 20:12:34.450479   54749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/apiserver.crt.dd3b5fb2: {Name:mk7c7f7595d00864d7ed82d7a31077d7d22b2bd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:12:34.450671   54749 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/apiserver.key.dd3b5fb2 ...
	I1207 20:12:34.450685   54749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/apiserver.key.dd3b5fb2: {Name:mkff5f82c3989b4ac99104e78949ebfe7211b6e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:12:34.450770   54749 certs.go:337] copying /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/apiserver.crt
	I1207 20:12:34.450849   54749 certs.go:341] copying /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/apiserver.key
	I1207 20:12:34.450914   54749 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/proxy-client.key
	I1207 20:12:34.450932   54749 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/proxy-client.crt with IP's: []
	I1207 20:12:35.141969   54749 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/proxy-client.crt ...
	I1207 20:12:35.142001   54749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/proxy-client.crt: {Name:mk3d489a31ee65302cc78fdd1cd07d67c745b5f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:12:35.142189   54749 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/proxy-client.key ...
	I1207 20:12:35.142204   54749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/proxy-client.key: {Name:mk4212857fb4eaff2d796ebaac3204702d3590bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:12:35.142287   54749 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1207 20:12:35.142317   54749 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1207 20:12:35.142333   54749 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1207 20:12:35.142348   54749 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1207 20:12:35.142363   54749 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-2292/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1207 20:12:35.142375   54749 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-2292/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1207 20:12:35.142393   54749 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-2292/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1207 20:12:35.142408   54749 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-2292/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1207 20:12:35.142467   54749 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-2292/.minikube/certs/home/jenkins/minikube-integration/17719-2292/.minikube/certs/7600.pem (1338 bytes)
	W1207 20:12:35.142509   54749 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-2292/.minikube/certs/home/jenkins/minikube-integration/17719-2292/.minikube/certs/7600_empty.pem, impossibly tiny 0 bytes
	I1207 20:12:35.142519   54749 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-2292/.minikube/certs/home/jenkins/minikube-integration/17719-2292/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 20:12:35.142548   54749 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-2292/.minikube/certs/home/jenkins/minikube-integration/17719-2292/.minikube/certs/ca.pem (1078 bytes)
	I1207 20:12:35.142577   54749 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-2292/.minikube/certs/home/jenkins/minikube-integration/17719-2292/.minikube/certs/cert.pem (1123 bytes)
	I1207 20:12:35.142609   54749 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-2292/.minikube/certs/home/jenkins/minikube-integration/17719-2292/.minikube/certs/key.pem (1679 bytes)
	I1207 20:12:35.142662   54749 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-2292/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-2292/.minikube/files/etc/ssl/certs/76002.pem (1708 bytes)
	I1207 20:12:35.142694   54749 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-2292/.minikube/files/etc/ssl/certs/76002.pem -> /usr/share/ca-certificates/76002.pem
	I1207 20:12:35.142709   54749 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-2292/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:12:35.142723   54749 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17719-2292/.minikube/certs/7600.pem -> /usr/share/ca-certificates/7600.pem
	I1207 20:12:35.143293   54749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 20:12:35.177176   54749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1207 20:12:35.208257   54749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 20:12:35.237611   54749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 20:12:35.266369   54749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 20:12:35.295590   54749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 20:12:35.323886   54749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 20:12:35.352985   54749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 20:12:35.381866   54749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/files/etc/ssl/certs/76002.pem --> /usr/share/ca-certificates/76002.pem (1708 bytes)
	I1207 20:12:35.411275   54749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 20:12:35.439622   54749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/certs/7600.pem --> /usr/share/ca-certificates/7600.pem (1338 bytes)
	I1207 20:12:35.468346   54749 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 20:12:35.489233   54749 ssh_runner.go:195] Run: openssl version
	I1207 20:12:35.496035   54749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7600.pem && ln -fs /usr/share/ca-certificates/7600.pem /etc/ssl/certs/7600.pem"
	I1207 20:12:35.507111   54749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7600.pem
	I1207 20:12:35.511448   54749 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:07 /usr/share/ca-certificates/7600.pem
	I1207 20:12:35.511546   54749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7600.pem
	I1207 20:12:35.519962   54749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7600.pem /etc/ssl/certs/51391683.0"
	I1207 20:12:35.531160   54749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/76002.pem && ln -fs /usr/share/ca-certificates/76002.pem /etc/ssl/certs/76002.pem"
	I1207 20:12:35.542476   54749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/76002.pem
	I1207 20:12:35.546958   54749 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:07 /usr/share/ca-certificates/76002.pem
	I1207 20:12:35.547023   54749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/76002.pem
	I1207 20:12:35.555330   54749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/76002.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 20:12:35.566697   54749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 20:12:35.577809   54749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:12:35.582701   54749 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:02 /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:12:35.582814   54749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:12:35.591499   54749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
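
Note: the 51391683.0 / 3ec20f2e.0 / b5213941.0 names above follow OpenSSL's subject-hash convention: the hash printed by "openssl x509 -hash" becomes the symlink name that the system trust store resolves. A minimal shell sketch of the same step for the minikubeCA certificate, reusing the paths from the log (illustrative only, not part of the test run):

	# compute the subject hash (prints e.g. b5213941, as seen in the next log line)
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# trust-store lookups then find the certificate via the <hash>.0 symlink
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
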
	I1207 20:12:35.602730   54749 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 20:12:35.606986   54749 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1207 20:12:35.607078   54749 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-362953 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-362953 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:12:35.607206   54749 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1207 20:12:35.626702   54749 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 20:12:35.637144   54749 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 20:12:35.647436   54749 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1207 20:12:35.647537   54749 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 20:12:35.658108   54749 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 20:12:35.658150   54749 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1207 20:12:35.713253   54749 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1207 20:12:35.713493   54749 kubeadm.go:322] [preflight] Running pre-flight checks
	I1207 20:12:35.934458   54749 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1207 20:12:35.934549   54749 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1050-aws
	I1207 20:12:35.934604   54749 kubeadm.go:322] DOCKER_VERSION: 24.0.7
	I1207 20:12:35.934645   54749 kubeadm.go:322] OS: Linux
	I1207 20:12:35.934695   54749 kubeadm.go:322] CGROUPS_CPU: enabled
	I1207 20:12:35.934748   54749 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1207 20:12:35.934815   54749 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1207 20:12:35.934869   54749 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1207 20:12:35.934923   54749 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1207 20:12:35.934975   54749 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1207 20:12:36.045126   54749 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 20:12:36.045236   54749 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 20:12:36.045329   54749 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1207 20:12:36.267378   54749 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 20:12:36.268959   54749 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 20:12:36.269223   54749 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1207 20:12:36.377260   54749 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 20:12:36.381552   54749 out.go:204]   - Generating certificates and keys ...
	I1207 20:12:36.381730   54749 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1207 20:12:36.381815   54749 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1207 20:12:36.739357   54749 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1207 20:12:36.939959   54749 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1207 20:12:37.791119   54749 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1207 20:12:38.565651   54749 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1207 20:12:38.865730   54749 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1207 20:12:38.866363   54749 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-362953 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1207 20:12:39.058621   54749 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1207 20:12:39.058984   54749 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-362953 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1207 20:12:39.532789   54749 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1207 20:12:40.395989   54749 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1207 20:12:41.743453   54749 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1207 20:12:41.743806   54749 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 20:12:42.214068   54749 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 20:12:42.434405   54749 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 20:12:42.978962   54749 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 20:12:43.679357   54749 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 20:12:43.680813   54749 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 20:12:43.683231   54749 out.go:204]   - Booting up control plane ...
	I1207 20:12:43.683332   54749 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 20:12:43.690962   54749 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 20:12:43.692659   54749 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 20:12:43.693869   54749 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 20:12:43.696540   54749 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1207 20:12:55.699163   54749 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.002468 seconds
	I1207 20:12:55.699277   54749 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 20:12:55.712649   54749 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 20:12:56.233997   54749 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 20:12:56.234154   54749 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-362953 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1207 20:12:56.742437   54749 kubeadm.go:322] [bootstrap-token] Using token: yko3qg.3cwnd4vf8yvxc722
	I1207 20:12:56.744248   54749 out.go:204]   - Configuring RBAC rules ...
	I1207 20:12:56.744375   54749 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 20:12:56.750988   54749 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 20:12:56.763998   54749 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 20:12:56.767069   54749 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 20:12:56.770108   54749 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 20:12:56.773126   54749 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 20:12:56.782932   54749 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 20:12:57.094940   54749 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1207 20:12:57.176669   54749 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1207 20:12:57.176725   54749 kubeadm.go:322] 
	I1207 20:12:57.176784   54749 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1207 20:12:57.176799   54749 kubeadm.go:322] 
	I1207 20:12:57.176872   54749 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1207 20:12:57.176881   54749 kubeadm.go:322] 
	I1207 20:12:57.176905   54749 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1207 20:12:57.176964   54749 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 20:12:57.177015   54749 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 20:12:57.177023   54749 kubeadm.go:322] 
	I1207 20:12:57.177072   54749 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1207 20:12:57.177147   54749 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 20:12:57.177214   54749 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 20:12:57.177221   54749 kubeadm.go:322] 
	I1207 20:12:57.177299   54749 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 20:12:57.177373   54749 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1207 20:12:57.177381   54749 kubeadm.go:322] 
	I1207 20:12:57.177471   54749 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token yko3qg.3cwnd4vf8yvxc722 \
	I1207 20:12:57.177576   54749 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:bf03bebb018fea717c072634f3af28c80686bb1a7a8d0c481a3a9bb717d143b1 \
	I1207 20:12:57.177602   54749 kubeadm.go:322]     --control-plane 
	I1207 20:12:57.177610   54749 kubeadm.go:322] 
	I1207 20:12:57.177697   54749 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1207 20:12:57.177706   54749 kubeadm.go:322] 
	I1207 20:12:57.177782   54749 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token yko3qg.3cwnd4vf8yvxc722 \
	I1207 20:12:57.177891   54749 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:bf03bebb018fea717c072634f3af28c80686bb1a7a8d0c481a3a9bb717d143b1 
	I1207 20:12:57.180639   54749 kubeadm.go:322] W1207 20:12:35.712574    1659 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1207 20:12:57.180861   54749 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1207 20:12:57.181006   54749 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I1207 20:12:57.181247   54749 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1050-aws\n", err: exit status 1
	I1207 20:12:57.181366   54749 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 20:12:57.181521   54749 kubeadm.go:322] W1207 20:12:43.690829    1659 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1207 20:12:57.181666   54749 kubeadm.go:322] W1207 20:12:43.692545    1659 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1207 20:12:57.181692   54749 cni.go:84] Creating CNI manager for ""
	I1207 20:12:57.181710   54749 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1207 20:12:57.181731   54749 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 20:12:57.181845   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:12:57.181901   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c minikube.k8s.io/name=ingress-addon-legacy-362953 minikube.k8s.io/updated_at=2023_12_07T20_12_57_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:12:57.660655   54749 ops.go:34] apiserver oom_adj: -16
	I1207 20:12:57.660772   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:12:57.757394   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:12:58.352158   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:12:58.852057   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:12:59.351625   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:12:59.852322   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:13:00.351901   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:13:00.852219   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:13:01.352203   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:13:01.851642   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:13:02.351936   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:13:02.851574   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:13:03.352411   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:13:03.851607   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:13:04.351814   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:13:04.852167   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:13:05.352235   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:13:05.851631   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:13:06.352453   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:13:06.852260   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:13:07.352428   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:13:07.852204   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:13:08.352370   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:13:08.851556   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:13:09.351624   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:13:09.851617   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:13:10.352222   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:13:10.852152   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:13:11.352093   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:13:11.851632   54749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 20:13:11.972574   54749 kubeadm.go:1088] duration metric: took 14.790780702s to wait for elevateKubeSystemPrivileges.
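
Note: the repeated "kubectl get sa default" calls above are a readiness poll; the cluster is only considered usable once the default ServiceAccount exists. A rough shell equivalent of that wait, using the same binary and kubeconfig paths shown in the log (sketch only; minikube's own retry and timeout handling is omitted):

	# poll until the default ServiceAccount is returned successfully
	until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
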
	I1207 20:13:11.972606   54749 kubeadm.go:406] StartCluster complete in 36.365531887s
	I1207 20:13:11.972623   54749 settings.go:142] acquiring lock: {Name:mk4e1ad85078db32f53ce2cb878f95b1dc79d720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:13:11.972679   54749 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17719-2292/kubeconfig
	I1207 20:13:11.973437   54749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-2292/kubeconfig: {Name:mkb58bbc3586feb84db8c4c89653a5136ccfc407 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:13:11.974132   54749 kapi.go:59] client config for ingress-addon-legacy-362953: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt", KeyFile:"/home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.key", CAFile:"/home/jenkins/minikube-integration/17719-2292/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6350), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 20:13:11.975234   54749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 20:13:11.975467   54749 config.go:182] Loaded profile config "ingress-addon-legacy-362953": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1207 20:13:11.975525   54749 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1207 20:13:11.975581   54749 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-362953"
	I1207 20:13:11.975596   54749 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-362953"
	I1207 20:13:11.975632   54749 host.go:66] Checking if "ingress-addon-legacy-362953" exists ...
	I1207 20:13:11.976078   54749 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-362953 --format={{.State.Status}}
	I1207 20:13:11.976712   54749 cert_rotation.go:137] Starting client certificate rotation controller
	I1207 20:13:11.976757   54749 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-362953"
	I1207 20:13:11.976773   54749 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-362953"
	I1207 20:13:11.977028   54749 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-362953 --format={{.State.Status}}
	I1207 20:13:12.024512   54749 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-362953" context rescaled to 1 replicas
	I1207 20:13:12.024556   54749 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 20:13:12.026960   54749 out.go:177] * Verifying Kubernetes components...
	I1207 20:13:12.030397   54749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 20:13:12.058976   54749 kapi.go:59] client config for ingress-addon-legacy-362953: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt", KeyFile:"/home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.key", CAFile:"/home/jenkins/minikube-integration/17719-2292/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6350), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 20:13:12.059238   54749 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-362953"
	I1207 20:13:12.059272   54749 host.go:66] Checking if "ingress-addon-legacy-362953" exists ...
	I1207 20:13:12.059745   54749 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-362953 --format={{.State.Status}}
	I1207 20:13:12.062944   54749 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 20:13:12.065198   54749 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 20:13:12.065224   54749 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 20:13:12.065287   54749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-362953
	I1207 20:13:12.102671   54749 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 20:13:12.102694   54749 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 20:13:12.102768   54749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-362953
	I1207 20:13:12.120689   54749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/ingress-addon-legacy-362953/id_rsa Username:docker}
	I1207 20:13:12.147410   54749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/ingress-addon-legacy-362953/id_rsa Username:docker}
	I1207 20:13:12.283071   54749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1207 20:13:12.283834   54749 kapi.go:59] client config for ingress-addon-legacy-362953: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt", KeyFile:"/home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.key", CAFile:"/home/jenkins/minikube-integration/17719-2292/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6350), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 20:13:12.284133   54749 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-362953" to be "Ready" ...
	I1207 20:13:12.288385   54749 node_ready.go:49] node "ingress-addon-legacy-362953" has status "Ready":"True"
	I1207 20:13:12.288408   54749 node_ready.go:38] duration metric: took 4.255037ms waiting for node "ingress-addon-legacy-362953" to be "Ready" ...
	I1207 20:13:12.288427   54749 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 20:13:12.297019   54749 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-4t24t" in "kube-system" namespace to be "Ready" ...
	I1207 20:13:12.308815   54749 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 20:13:12.385680   54749 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 20:13:12.988347   54749 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
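
Note: the host record is added by rewriting the coredns ConfigMap; the sed pipeline a few lines up inserts a hosts block mapping 192.168.49.1 to host.minikube.internal ahead of the forward plugin. A quick way to confirm the injection from a machine whose kubectl points at this cluster (commands are illustrative; the busybox image tag is an assumption, not taken from the log):

	# show the injected hosts block in the coredns ConfigMap
	kubectl -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
	# resolve the record from inside the cluster with a throwaway pod
	kubectl run dns-check --rm -it --restart=Never --image=busybox:1.36 -- nslookup host.minikube.internal
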
	I1207 20:13:13.255578   54749 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1207 20:13:13.258263   54749 addons.go:502] enable addons completed in 1.282741066s: enabled=[storage-provisioner default-storageclass]
	I1207 20:13:14.315278   54749 pod_ready.go:102] pod "coredns-66bff467f8-4t24t" in "kube-system" namespace has status "Ready":"False"
	I1207 20:13:16.316232   54749 pod_ready.go:102] pod "coredns-66bff467f8-4t24t" in "kube-system" namespace has status "Ready":"False"
	I1207 20:13:18.815887   54749 pod_ready.go:102] pod "coredns-66bff467f8-4t24t" in "kube-system" namespace has status "Ready":"False"
	I1207 20:13:21.315020   54749 pod_ready.go:102] pod "coredns-66bff467f8-4t24t" in "kube-system" namespace has status "Ready":"False"
	I1207 20:13:23.315215   54749 pod_ready.go:102] pod "coredns-66bff467f8-4t24t" in "kube-system" namespace has status "Ready":"False"
	I1207 20:13:25.315256   54749 pod_ready.go:102] pod "coredns-66bff467f8-4t24t" in "kube-system" namespace has status "Ready":"False"
	I1207 20:13:27.315855   54749 pod_ready.go:102] pod "coredns-66bff467f8-4t24t" in "kube-system" namespace has status "Ready":"False"
	I1207 20:13:29.815447   54749 pod_ready.go:102] pod "coredns-66bff467f8-4t24t" in "kube-system" namespace has status "Ready":"False"
	I1207 20:13:32.315225   54749 pod_ready.go:102] pod "coredns-66bff467f8-4t24t" in "kube-system" namespace has status "Ready":"False"
	I1207 20:13:34.315343   54749 pod_ready.go:102] pod "coredns-66bff467f8-4t24t" in "kube-system" namespace has status "Ready":"False"
	I1207 20:13:36.315793   54749 pod_ready.go:102] pod "coredns-66bff467f8-4t24t" in "kube-system" namespace has status "Ready":"False"
	I1207 20:13:38.315971   54749 pod_ready.go:102] pod "coredns-66bff467f8-4t24t" in "kube-system" namespace has status "Ready":"False"
	I1207 20:13:40.816100   54749 pod_ready.go:102] pod "coredns-66bff467f8-4t24t" in "kube-system" namespace has status "Ready":"False"
	I1207 20:13:43.314992   54749 pod_ready.go:102] pod "coredns-66bff467f8-4t24t" in "kube-system" namespace has status "Ready":"False"
	I1207 20:13:45.317327   54749 pod_ready.go:102] pod "coredns-66bff467f8-4t24t" in "kube-system" namespace has status "Ready":"False"
	I1207 20:13:47.318484   54749 pod_ready.go:102] pod "coredns-66bff467f8-4t24t" in "kube-system" namespace has status "Ready":"False"
	I1207 20:13:49.815768   54749 pod_ready.go:102] pod "coredns-66bff467f8-4t24t" in "kube-system" namespace has status "Ready":"False"
	I1207 20:13:50.815730   54749 pod_ready.go:92] pod "coredns-66bff467f8-4t24t" in "kube-system" namespace has status "Ready":"True"
	I1207 20:13:50.815755   54749 pod_ready.go:81] duration metric: took 38.518696332s waiting for pod "coredns-66bff467f8-4t24t" in "kube-system" namespace to be "Ready" ...
	I1207 20:13:50.815766   54749 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-kr85q" in "kube-system" namespace to be "Ready" ...
	I1207 20:13:50.817610   54749 pod_ready.go:97] error getting pod "coredns-66bff467f8-kr85q" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-kr85q" not found
	I1207 20:13:50.817632   54749 pod_ready.go:81] duration metric: took 1.858955ms waiting for pod "coredns-66bff467f8-kr85q" in "kube-system" namespace to be "Ready" ...
	E1207 20:13:50.817641   54749 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-66bff467f8-kr85q" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-kr85q" not found
	I1207 20:13:50.817650   54749 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-362953" in "kube-system" namespace to be "Ready" ...
	I1207 20:13:50.821944   54749 pod_ready.go:92] pod "etcd-ingress-addon-legacy-362953" in "kube-system" namespace has status "Ready":"True"
	I1207 20:13:50.821971   54749 pod_ready.go:81] duration metric: took 4.309528ms waiting for pod "etcd-ingress-addon-legacy-362953" in "kube-system" namespace to be "Ready" ...
	I1207 20:13:50.821983   54749 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-362953" in "kube-system" namespace to be "Ready" ...
	I1207 20:13:50.826660   54749 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-362953" in "kube-system" namespace has status "Ready":"True"
	I1207 20:13:50.826687   54749 pod_ready.go:81] duration metric: took 4.696544ms waiting for pod "kube-apiserver-ingress-addon-legacy-362953" in "kube-system" namespace to be "Ready" ...
	I1207 20:13:50.826699   54749 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-362953" in "kube-system" namespace to be "Ready" ...
	I1207 20:13:50.831546   54749 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-362953" in "kube-system" namespace has status "Ready":"True"
	I1207 20:13:50.831573   54749 pod_ready.go:81] duration metric: took 4.865387ms waiting for pod "kube-controller-manager-ingress-addon-legacy-362953" in "kube-system" namespace to be "Ready" ...
	I1207 20:13:50.831584   54749 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-362953" in "kube-system" namespace to be "Ready" ...
	I1207 20:13:51.011129   54749 request.go:629] Waited for 177.350309ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-362953
	I1207 20:13:51.014304   54749 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-362953" in "kube-system" namespace has status "Ready":"True"
	I1207 20:13:51.014332   54749 pod_ready.go:81] duration metric: took 182.740394ms waiting for pod "kube-scheduler-ingress-addon-legacy-362953" in "kube-system" namespace to be "Ready" ...
	I1207 20:13:51.014343   54749 pod_ready.go:38] duration metric: took 38.725896134s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 20:13:51.014362   54749 api_server.go:52] waiting for apiserver process to appear ...
	I1207 20:13:51.014430   54749 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 20:13:51.051747   54749 api_server.go:72] duration metric: took 39.027157898s to wait for apiserver process to appear ...
	I1207 20:13:51.051780   54749 api_server.go:88] waiting for apiserver healthz status ...
	I1207 20:13:51.051815   54749 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1207 20:13:51.062035   54749 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1207 20:13:51.063235   54749 api_server.go:141] control plane version: v1.18.20
	I1207 20:13:51.063283   54749 api_server.go:131] duration metric: took 11.488242ms to wait for apiserver health ...
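
Note: the healthz probe above authenticates to the apiserver with the profile's client certificate. A manual equivalent, reusing the certificate paths from the client config logged earlier (a sketch run from the Jenkins host; assumes curl is available and 192.168.49.2:8443 is reachable):

	curl --cacert /home/jenkins/minikube-integration/17719-2292/.minikube/ca.crt \
	     --cert /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt \
	     --key /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.key \
	     https://192.168.49.2:8443/healthz
	# expected response body: ok
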
	I1207 20:13:51.063296   54749 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 20:13:51.211696   54749 request.go:629] Waited for 148.303509ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1207 20:13:51.217432   54749 system_pods.go:59] 7 kube-system pods found
	I1207 20:13:51.217461   54749 system_pods.go:61] "coredns-66bff467f8-4t24t" [0915acf1-8328-45fa-b01f-cb6cdcc311e4] Running
	I1207 20:13:51.217467   54749 system_pods.go:61] "etcd-ingress-addon-legacy-362953" [af02e583-e926-46d3-b1d0-cbfa71e378c0] Running
	I1207 20:13:51.217473   54749 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-362953" [2ee28f2d-cbfa-49e5-8f19-ce836cc81ff0] Running
	I1207 20:13:51.217478   54749 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-362953" [c3985cd9-f973-4e99-9e70-fb79cc8a0012] Running
	I1207 20:13:51.217483   54749 system_pods.go:61] "kube-proxy-qc8k9" [53d40163-d84b-477d-9da0-a0d90709682f] Running
	I1207 20:13:51.217489   54749 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-362953" [a3b03431-e51a-441c-b927-60e34aabc154] Running
	I1207 20:13:51.217497   54749 system_pods.go:61] "storage-provisioner" [8513bd2d-c058-49de-b8d9-81501f72b025] Running
	I1207 20:13:51.217505   54749 system_pods.go:74] duration metric: took 154.191772ms to wait for pod list to return data ...
	I1207 20:13:51.217520   54749 default_sa.go:34] waiting for default service account to be created ...
	I1207 20:13:51.410816   54749 request.go:629] Waited for 193.231253ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1207 20:13:51.413224   54749 default_sa.go:45] found service account: "default"
	I1207 20:13:51.413251   54749 default_sa.go:55] duration metric: took 195.723237ms for default service account to be created ...
	I1207 20:13:51.413261   54749 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 20:13:51.611640   54749 request.go:629] Waited for 198.319129ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1207 20:13:51.617144   54749 system_pods.go:86] 7 kube-system pods found
	I1207 20:13:51.617183   54749 system_pods.go:89] "coredns-66bff467f8-4t24t" [0915acf1-8328-45fa-b01f-cb6cdcc311e4] Running
	I1207 20:13:51.617191   54749 system_pods.go:89] "etcd-ingress-addon-legacy-362953" [af02e583-e926-46d3-b1d0-cbfa71e378c0] Running
	I1207 20:13:51.617196   54749 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-362953" [2ee28f2d-cbfa-49e5-8f19-ce836cc81ff0] Running
	I1207 20:13:51.617202   54749 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-362953" [c3985cd9-f973-4e99-9e70-fb79cc8a0012] Running
	I1207 20:13:51.617207   54749 system_pods.go:89] "kube-proxy-qc8k9" [53d40163-d84b-477d-9da0-a0d90709682f] Running
	I1207 20:13:51.617212   54749 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-362953" [a3b03431-e51a-441c-b927-60e34aabc154] Running
	I1207 20:13:51.617217   54749 system_pods.go:89] "storage-provisioner" [8513bd2d-c058-49de-b8d9-81501f72b025] Running
	I1207 20:13:51.617225   54749 system_pods.go:126] duration metric: took 203.958672ms to wait for k8s-apps to be running ...
	I1207 20:13:51.617238   54749 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 20:13:51.617291   54749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 20:13:51.633660   54749 system_svc.go:56] duration metric: took 16.412271ms WaitForService to wait for kubelet.
	I1207 20:13:51.633689   54749 kubeadm.go:581] duration metric: took 39.609108607s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 20:13:51.633708   54749 node_conditions.go:102] verifying NodePressure condition ...
	I1207 20:13:51.811078   54749 request.go:629] Waited for 177.295967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1207 20:13:51.813985   54749 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1207 20:13:51.814019   54749 node_conditions.go:123] node cpu capacity is 2
	I1207 20:13:51.814031   54749 node_conditions.go:105] duration metric: took 180.317636ms to run NodePressure ...
	I1207 20:13:51.814043   54749 start.go:228] waiting for startup goroutines ...
	I1207 20:13:51.814050   54749 start.go:233] waiting for cluster config update ...
	I1207 20:13:51.814059   54749 start.go:242] writing updated cluster config ...
	I1207 20:13:51.814341   54749 ssh_runner.go:195] Run: rm -f paused
	I1207 20:13:51.873816   54749 start.go:600] kubectl: 1.28.4, cluster: 1.18.20 (minor skew: 10)
	I1207 20:13:51.875895   54749 out.go:177] 
	W1207 20:13:51.877475   54749 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.18.20.
	I1207 20:13:51.879109   54749 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1207 20:13:51.880837   54749 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-362953" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* Dec 07 20:12:32 ingress-addon-legacy-362953 dockerd[1300]: time="2023-12-07T20:12:32.151691271Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 07 20:12:32 ingress-addon-legacy-362953 systemd[1]: Started Docker Application Container Engine.
	Dec 07 20:12:32 ingress-addon-legacy-362953 dockerd[1300]: time="2023-12-07T20:12:32.151804583Z" level=info msg="API listen on [::]:2376"
	Dec 07 20:13:13 ingress-addon-legacy-362953 dockerd[1300]: time="2023-12-07T20:13:13.050711606Z" level=error msg="stream copy error: reading from a closed fifo"
	Dec 07 20:13:13 ingress-addon-legacy-362953 dockerd[1300]: time="2023-12-07T20:13:13.050731044Z" level=error msg="stream copy error: reading from a closed fifo"
	Dec 07 20:13:53 ingress-addon-legacy-362953 dockerd[1300]: time="2023-12-07T20:13:53.454409066Z" level=warning msg="reference for unknown type: " digest="sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" remote="docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7"
	Dec 07 20:13:55 ingress-addon-legacy-362953 dockerd[1300]: time="2023-12-07T20:13:55.079601272Z" level=info msg="ignoring event" container=9b744d960c6803aff2f8b9440e0925968c702cf62fd5feffd544aa374a3f1f8c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:13:55 ingress-addon-legacy-362953 dockerd[1300]: time="2023-12-07T20:13:55.096385388Z" level=info msg="ignoring event" container=3996804547b55495346aeb69681b41d0f5e5633d16b218aaeb34cd2a10a1b490 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:13:55 ingress-addon-legacy-362953 dockerd[1300]: time="2023-12-07T20:13:55.657518981Z" level=info msg="ignoring event" container=6b3b30952f53864af91c93f4d6dddba0d6de628d651970fce0f1628dbd565988 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:13:55 ingress-addon-legacy-362953 dockerd[1300]: time="2023-12-07T20:13:55.665505639Z" level=info msg="ignoring event" container=d7012751e60b0c71e7b1ab39af30a82a2ffd7b47000ac071f822c55e8d1d892e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:13:57 ingress-addon-legacy-362953 dockerd[1300]: time="2023-12-07T20:13:57.042309237Z" level=warning msg="reference for unknown type: " digest="sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324" remote="registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324"
	Dec 07 20:14:05 ingress-addon-legacy-362953 dockerd[1300]: time="2023-12-07T20:14:05.100798566Z" level=warning msg="Published ports are discarded when using host network mode"
	Dec 07 20:14:05 ingress-addon-legacy-362953 dockerd[1300]: time="2023-12-07T20:14:05.162988249Z" level=warning msg="Published ports are discarded when using host network mode"
	Dec 07 20:14:05 ingress-addon-legacy-362953 dockerd[1300]: time="2023-12-07T20:14:05.326398029Z" level=warning msg="reference for unknown type: " digest="sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" remote="docker.io/cryptexlabs/minikube-ingress-dns@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"
	Dec 07 20:14:11 ingress-addon-legacy-362953 dockerd[1300]: time="2023-12-07T20:14:11.598294457Z" level=info msg="ignoring event" container=4507d9d0810901e94cbde4e3c0fa61290aa2aab3e8e2e2a5c19cf2a2aba8c6f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:14:11 ingress-addon-legacy-362953 dockerd[1300]: time="2023-12-07T20:14:11.949329522Z" level=info msg="ignoring event" container=4bf5d6be6db82a38e795122bd6577d35a0b943aa4677dbe56787b4d86ee662cc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:14:29 ingress-addon-legacy-362953 dockerd[1300]: time="2023-12-07T20:14:29.862215562Z" level=info msg="ignoring event" container=7325d53a0585ff012c3aa97eb440873dadf706d753bfd38a364a51df1c22a134 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:14:37 ingress-addon-legacy-362953 dockerd[1300]: time="2023-12-07T20:14:37.856560373Z" level=info msg="ignoring event" container=ddde14b7af413f6f0b1f125e700fc89b13ebfdb95656085aa9e8dabce4e40dc5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:14:38 ingress-addon-legacy-362953 dockerd[1300]: time="2023-12-07T20:14:38.217471747Z" level=info msg="ignoring event" container=8d417618fbc4ac0221231c1fceab55286f2021aacc50b54e72214330a00d9c25 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:14:50 ingress-addon-legacy-362953 dockerd[1300]: time="2023-12-07T20:14:50.763357232Z" level=info msg="ignoring event" container=5069930e53f5f508f5a8593cc7661122b4cd1eee553f9a1dbc2bc8870da2b3c1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:14:52 ingress-addon-legacy-362953 dockerd[1300]: time="2023-12-07T20:14:52.880527565Z" level=info msg="ignoring event" container=8a87d6956d4e0d34be4bdbaf6b35a0d050483a23670df403942cc1bc2ce4b7a9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:15:03 ingress-addon-legacy-362953 dockerd[1300]: time="2023-12-07T20:15:03.724130775Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=62328adbb92d333e1eaba6207149b61d4e5a6bfcb169e32500025fcc8d85177d
	Dec 07 20:15:03 ingress-addon-legacy-362953 dockerd[1300]: time="2023-12-07T20:15:03.739199787Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=62328adbb92d333e1eaba6207149b61d4e5a6bfcb169e32500025fcc8d85177d
	Dec 07 20:15:03 ingress-addon-legacy-362953 dockerd[1300]: time="2023-12-07T20:15:03.840444509Z" level=info msg="ignoring event" container=62328adbb92d333e1eaba6207149b61d4e5a6bfcb169e32500025fcc8d85177d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:15:03 ingress-addon-legacy-362953 dockerd[1300]: time="2023-12-07T20:15:03.911833312Z" level=info msg="ignoring event" container=f202359bcccc0097c623af0236a4552b482200f792c8b85eec70eb54a1f087ba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8a87d6956d4e0       dd1b12fcb6097                                                                                                      17 seconds ago       Exited              hello-world-app           2                   adf0c14f13d6d       hello-world-app-5f5d8b66bb-5m94h
	71afe20f3a7b9       nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc                                      42 seconds ago       Running             nginx                     0                   1f8f358805683       nginx
	62328adbb92d3       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   About a minute ago   Exited              controller                0                   f202359bcccc0       ingress-nginx-controller-7fcf777cb7-q7rl9
	3996804547b55       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              patch                     0                   d7012751e60b0       ingress-nginx-admission-patch-h8lfx
	9b744d960c680       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              create                    0                   6b3b30952f538       ingress-nginx-admission-create-7jvh6
	af8a3783fff3c       gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944    About a minute ago   Running             storage-provisioner       0                   193d62b8699a5       storage-provisioner
	1b09a9c769f94       565297bc6f7d4                                                                                                      About a minute ago   Running             kube-proxy                0                   8baa9a74e8d79       kube-proxy-qc8k9
	b55eb46784ad3       6e17ba78cf3eb                                                                                                      About a minute ago   Running             coredns                   0                   75199ae47ccac       coredns-66bff467f8-4t24t
	7c667030da0ba       ab707b0a0ea33                                                                                                      2 minutes ago        Running             etcd                      0                   1d77902a3c9d0       etcd-ingress-addon-legacy-362953
	5bdf4eaaea19c       095f37015706d                                                                                                      2 minutes ago        Running             kube-scheduler            0                   b108b78814cdb       kube-scheduler-ingress-addon-legacy-362953
	08179a8f0f182       2694cf044d665                                                                                                      2 minutes ago        Running             kube-apiserver            0                   2e09df51a4874       kube-apiserver-ingress-addon-legacy-362953
	54d3e5de0106c       68a4fac29a865                                                                                                      2 minutes ago        Running             kube-controller-manager   0                   c72b2ce2e893c       kube-controller-manager-ingress-addon-legacy-362953
	
	* 
	* ==> coredns [b55eb46784ad] <==
	* [INFO] 172.17.0.1:29596 - 49479 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000069751s
	[INFO] 172.17.0.1:29596 - 21458 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000069325s
	[INFO] 172.17.0.1:29596 - 47686 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003686927s
	[INFO] 172.17.0.1:26750 - 38390 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.006103934s
	[INFO] 172.17.0.1:26750 - 6695 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000180552s
	[INFO] 172.17.0.1:29596 - 2832 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002969526s
	[INFO] 172.17.0.1:29596 - 46390 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000075807s
	[INFO] 172.17.0.1:33034 - 1536 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000100085s
	[INFO] 172.17.0.1:47425 - 33888 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000033814s
	[INFO] 172.17.0.1:47425 - 51293 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00010299s
	[INFO] 172.17.0.1:33034 - 1632 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000029374s
	[INFO] 172.17.0.1:47425 - 31683 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000068004s
	[INFO] 172.17.0.1:33034 - 62118 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000026847s
	[INFO] 172.17.0.1:33034 - 43499 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000076487s
	[INFO] 172.17.0.1:47425 - 4320 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000057559s
	[INFO] 172.17.0.1:47425 - 16653 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041566s
	[INFO] 172.17.0.1:33034 - 11524 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000026634s
	[INFO] 172.17.0.1:33034 - 58813 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038425s
	[INFO] 172.17.0.1:47425 - 16070 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000106756s
	[INFO] 172.17.0.1:47425 - 54635 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001235198s
	[INFO] 172.17.0.1:33034 - 49336 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000879427s
	[INFO] 172.17.0.1:47425 - 17301 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000993141s
	[INFO] 172.17.0.1:33034 - 1411 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001026839s
	[INFO] 172.17.0.1:47425 - 9383 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000041115s
	[INFO] 172.17.0.1:33034 - 18922 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000061136s
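	Note: the run of NXDOMAIN answers above followed by a NOERROR answer is the normal cluster DNS search-path expansion rather than a fault in itself: coredns is asked for hello-world-app.default.svc.cluster.local with each search domain (ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal) appended in turn before the name resolves as-is. A minimal way to confirm the search path, assuming a pod from this run is still reachable (the pod name and the sample output are illustrative, not captured in this report):
	# Show the resolver configuration injected into a pod on this cluster.
	kubectl --context ingress-addon-legacy-362953 exec nginx -- cat /etc/resolv.conf
	# Typical shape of the output:
	#   search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	#   options ndots:5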
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-362953
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-362953
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c
	                    minikube.k8s.io/name=ingress-addon-legacy-362953
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_07T20_12_57_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Dec 2023 20:12:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-362953
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Dec 2023 20:15:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Dec 2023 20:15:01 +0000   Thu, 07 Dec 2023 20:12:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Dec 2023 20:15:01 +0000   Thu, 07 Dec 2023 20:12:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Dec 2023 20:15:01 +0000   Thu, 07 Dec 2023 20:12:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Dec 2023 20:15:01 +0000   Thu, 07 Dec 2023 20:13:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-362953
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022504Ki
	  pods:               110
	System Info:
	  Machine ID:                 3631eaf041ab4d8eb8b55cd720d8ebe0
	  System UUID:                f7815760-d949-438f-9bdc-1f617b0f0414
	  Boot ID:                    654d4215-4a80-4da6-8d0f-f014f59dffc2
	  Kernel Version:             5.15.0-1050-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-5m94h                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 coredns-66bff467f8-4t24t                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     118s
	  kube-system                 etcd-ingress-addon-legacy-362953                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-apiserver-ingress-addon-legacy-362953             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-362953    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-proxy-qc8k9                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-scheduler-ingress-addon-legacy-362953             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (0%)   170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  2m23s (x4 over 2m23s)  kubelet     Node ingress-addon-legacy-362953 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m23s (x4 over 2m23s)  kubelet     Node ingress-addon-legacy-362953 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m23s (x3 over 2m23s)  kubelet     Node ingress-addon-legacy-362953 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m9s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m9s                   kubelet     Node ingress-addon-legacy-362953 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m9s                   kubelet     Node ingress-addon-legacy-362953 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m9s                   kubelet     Node ingress-addon-legacy-362953 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m9s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                119s                   kubelet     Node ingress-addon-legacy-362953 status is now: NodeReady
	  Normal  Starting                 116s                   kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000746] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000979] FS-Cache: N-cookie d=0000000090f41b20{9p.inode} n=00000000dc06c1cd
	[  +0.001093] FS-Cache: N-key=[8] '95cfc90000000000'
	[Dec 7 20:11] FS-Cache: Duplicate cookie detected
	[  +0.000769] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000999] FS-Cache: O-cookie d=0000000090f41b20{9p.inode} n=00000000cac5833d
	[  +0.001095] FS-Cache: O-key=[8] '94cfc90000000000'
	[  +0.000734] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.001021] FS-Cache: N-cookie d=0000000090f41b20{9p.inode} n=000000009749928f
	[  +0.001098] FS-Cache: N-key=[8] '94cfc90000000000'
	[  +0.400519] FS-Cache: Duplicate cookie detected
	[  +0.000763] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001060] FS-Cache: O-cookie d=0000000090f41b20{9p.inode} n=00000000b90ae50d
	[  +0.001107] FS-Cache: O-key=[8] '9ccfc90000000000'
	[  +0.000736] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000959] FS-Cache: N-cookie d=0000000090f41b20{9p.inode} n=0000000050e0b035
	[  +0.001129] FS-Cache: N-key=[8] '9ccfc90000000000'
	[  +4.302135] FS-Cache: Duplicate cookie detected
	[  +0.000780] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001153] FS-Cache: O-cookie d=00000000a7d053ca{9P.session} n=0000000054524ae3
	[  +0.001245] FS-Cache: O-key=[10] '34323935363934383134'
	[  +0.000826] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.001013] FS-Cache: N-cookie d=00000000a7d053ca{9P.session} n=000000007668a989
	[  +0.001156] FS-Cache: N-key=[10] '34323935363934383134'
	[Dec 7 20:12] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	* 
	* ==> etcd [7c667030da0b] <==
	* raft2023/12/07 20:12:49 INFO: aec36adc501070cc became follower at term 0
	raft2023/12/07 20:12:49 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/12/07 20:12:49 INFO: aec36adc501070cc became follower at term 1
	raft2023/12/07 20:12:49 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-07 20:12:49.445037 W | auth: simple token is not cryptographically signed
	2023-12-07 20:12:49.447597 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-12-07 20:12:49.451082 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/12/07 20:12:49 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-12-07 20:12:49.452139 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-07 20:12:49.452286 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-12-07 20:12:49.452393 I | embed: listening for peers on 192.168.49.2:2380
	2023-12-07 20:12:49.452525 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2023/12/07 20:12:49 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/12/07 20:12:49 INFO: aec36adc501070cc became candidate at term 2
	raft2023/12/07 20:12:49 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/12/07 20:12:49 INFO: aec36adc501070cc became leader at term 2
	raft2023/12/07 20:12:49 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-12-07 20:12:49.742006 I | etcdserver: setting up the initial cluster version to 3.4
	2023-12-07 20:12:49.742548 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-12-07 20:12:49.742695 I | etcdserver/api: enabled capabilities for version 3.4
	2023-12-07 20:12:49.742792 I | etcdserver: published {Name:ingress-addon-legacy-362953 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-12-07 20:12:49.742994 I | embed: ready to serve client requests
	2023-12-07 20:12:49.744413 I | embed: serving client requests on 192.168.49.2:2379
	2023-12-07 20:12:49.748268 I | embed: ready to serve client requests
	2023-12-07 20:12:49.751597 I | embed: serving client requests on 127.0.0.1:2379
	
	* 
	* ==> kernel <==
	*  20:15:09 up 57 min,  0 users,  load average: 1.33, 1.57, 1.09
	Linux ingress-addon-legacy-362953 5.15.0-1050-aws #55~20.04.1-Ubuntu SMP Mon Nov 6 12:18:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kube-apiserver [08179a8f0f18] <==
	* E1207 20:12:53.888141       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1207 20:12:54.007314       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1207 20:12:54.008785       1 cache.go:39] Caches are synced for autoregister controller
	I1207 20:12:54.011612       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1207 20:12:54.011853       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1207 20:12:54.011986       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1207 20:12:54.804423       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1207 20:12:54.804450       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1207 20:12:54.813982       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1207 20:12:54.818663       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1207 20:12:54.818884       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1207 20:12:55.266133       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 20:12:55.315616       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1207 20:12:55.418356       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1207 20:12:55.419437       1 controller.go:609] quota admission added evaluator for: endpoints
	I1207 20:12:55.423183       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 20:12:56.249922       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1207 20:12:57.074197       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1207 20:12:57.155071       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1207 20:13:00.647876       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 20:13:11.677712       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1207 20:13:12.311317       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1207 20:13:52.783071       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1207 20:14:17.714026       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E1207 20:15:01.719029       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [54d3e5de0106] <==
	* E1207 20:13:11.813888       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	E1207 20:13:11.846577       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I1207 20:13:11.921894       1 shared_informer.go:230] Caches are synced for disruption 
	I1207 20:13:11.921921       1 disruption.go:339] Sending events to api server.
	I1207 20:13:12.031053       1 shared_informer.go:230] Caches are synced for persistent volume 
	I1207 20:13:12.045534       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"697d8aa6-e504-4795-828e-eb78056bfca0", APIVersion:"apps/v1", ResourceVersion:"349", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1207 20:13:12.058323       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"f9a8ae76-578f-4782-9280-53f64587a9a6", APIVersion:"apps/v1", ResourceVersion:"350", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-kr85q
	I1207 20:13:12.058391       1 shared_informer.go:230] Caches are synced for attach detach 
	I1207 20:13:12.094368       1 shared_informer.go:230] Caches are synced for HPA 
	I1207 20:13:12.243322       1 shared_informer.go:230] Caches are synced for stateful set 
	I1207 20:13:12.246903       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1207 20:13:12.283333       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1207 20:13:12.283354       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1207 20:13:12.305372       1 shared_informer.go:230] Caches are synced for daemon sets 
	I1207 20:13:12.321400       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"462ff0b5-7d24-4994-a1a8-ac24cb244de7", APIVersion:"apps/v1", ResourceVersion:"220", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-qc8k9
	I1207 20:13:12.327001       1 shared_informer.go:230] Caches are synced for resource quota 
	I1207 20:13:12.332063       1 shared_informer.go:230] Caches are synced for resource quota 
	I1207 20:13:52.763685       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"65a99d1b-b9d8-4ee3-a8ec-06293536cd2c", APIVersion:"apps/v1", ResourceVersion:"471", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1207 20:13:52.774834       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"faa65b6c-5a35-43e6-b4cc-03d287afebf0", APIVersion:"apps/v1", ResourceVersion:"472", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-q7rl9
	I1207 20:13:52.813891       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"87c066ab-abf9-4919-b352-2c6e9022a02a", APIVersion:"batch/v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-7jvh6
	I1207 20:13:52.893112       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"d680e68c-aa42-4997-8fdf-b398128ee8d8", APIVersion:"batch/v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-h8lfx
	I1207 20:13:55.591198       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"87c066ab-abf9-4919-b352-2c6e9022a02a", APIVersion:"batch/v1", ResourceVersion:"487", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1207 20:13:55.620312       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"d680e68c-aa42-4997-8fdf-b398128ee8d8", APIVersion:"batch/v1", ResourceVersion:"497", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1207 20:14:34.492955       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"9382e05a-0d70-434a-adb2-05b1f25f4482", APIVersion:"apps/v1", ResourceVersion:"614", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1207 20:14:34.521846       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"cf30ecdd-b91c-4260-bc60-25207d28136d", APIVersion:"apps/v1", ResourceVersion:"615", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-5m94h
	
	* 
	* ==> kube-proxy [1b09a9c769f9] <==
	* W1207 20:13:13.440646       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1207 20:13:13.451891       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1207 20:13:13.451941       1 server_others.go:186] Using iptables Proxier.
	I1207 20:13:13.452418       1 server.go:583] Version: v1.18.20
	I1207 20:13:13.455566       1 config.go:315] Starting service config controller
	I1207 20:13:13.455650       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1207 20:13:13.456071       1 config.go:133] Starting endpoints config controller
	I1207 20:13:13.456142       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1207 20:13:13.556044       1 shared_informer.go:230] Caches are synced for service config 
	I1207 20:13:13.556618       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [5bdf4eaaea19] <==
	* W1207 20:12:53.977765       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1207 20:12:54.022480       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1207 20:12:54.022732       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1207 20:12:54.024983       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1207 20:12:54.025219       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I1207 20:12:54.025465       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1207 20:12:54.032341       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1207 20:12:54.031371       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1207 20:12:54.031446       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1207 20:12:54.031510       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1207 20:12:54.031583       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1207 20:12:54.031648       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1207 20:12:54.031731       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1207 20:12:54.031793       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1207 20:12:54.031852       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1207 20:12:54.031912       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1207 20:12:54.031972       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1207 20:12:54.032035       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1207 20:12:54.037184       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1207 20:12:54.883115       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1207 20:12:55.018086       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1207 20:12:55.071253       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1207 20:12:55.084300       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1207 20:12:56.732984       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1207 20:13:11.811253       1 factory.go:503] pod: kube-system/coredns-66bff467f8-kr85q is already present in the active queue
	
	* 
	* ==> kubelet <==
	* Dec 07 20:14:40 ingress-addon-legacy-362953 kubelet[2875]: I1207 20:14:40.725303    2875 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 7325d53a0585ff012c3aa97eb440873dadf706d753bfd38a364a51df1c22a134
	Dec 07 20:14:40 ingress-addon-legacy-362953 kubelet[2875]: E1207 20:14:40.725637    2875 pod_workers.go:191] Error syncing pod f6e5f6b2-543e-4298-aa71-926b76cc2bb5 ("kube-ingress-dns-minikube_kube-system(f6e5f6b2-543e-4298-aa71-926b76cc2bb5)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(f6e5f6b2-543e-4298-aa71-926b76cc2bb5)"
	Dec 07 20:14:50 ingress-addon-legacy-362953 kubelet[2875]: I1207 20:14:50.523937    2875 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-g9h6l" (UniqueName: "kubernetes.io/secret/f6e5f6b2-543e-4298-aa71-926b76cc2bb5-minikube-ingress-dns-token-g9h6l") pod "f6e5f6b2-543e-4298-aa71-926b76cc2bb5" (UID: "f6e5f6b2-543e-4298-aa71-926b76cc2bb5")
	Dec 07 20:14:50 ingress-addon-legacy-362953 kubelet[2875]: I1207 20:14:50.528300    2875 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6e5f6b2-543e-4298-aa71-926b76cc2bb5-minikube-ingress-dns-token-g9h6l" (OuterVolumeSpecName: "minikube-ingress-dns-token-g9h6l") pod "f6e5f6b2-543e-4298-aa71-926b76cc2bb5" (UID: "f6e5f6b2-543e-4298-aa71-926b76cc2bb5"). InnerVolumeSpecName "minikube-ingress-dns-token-g9h6l". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 07 20:14:50 ingress-addon-legacy-362953 kubelet[2875]: I1207 20:14:50.624267    2875 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-g9h6l" (UniqueName: "kubernetes.io/secret/f6e5f6b2-543e-4298-aa71-926b76cc2bb5-minikube-ingress-dns-token-g9h6l") on node "ingress-addon-legacy-362953" DevicePath ""
	Dec 07 20:14:51 ingress-addon-legacy-362953 kubelet[2875]: I1207 20:14:51.185671    2875 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 7325d53a0585ff012c3aa97eb440873dadf706d753bfd38a364a51df1c22a134
	Dec 07 20:14:52 ingress-addon-legacy-362953 kubelet[2875]: I1207 20:14:52.729031    2875 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 8d417618fbc4ac0221231c1fceab55286f2021aacc50b54e72214330a00d9c25
	Dec 07 20:14:52 ingress-addon-legacy-362953 kubelet[2875]: W1207 20:14:52.910153    2875 container.go:412] Failed to create summary reader for "/kubepods/besteffort/pod0926d4c6-89c8-4723-a217-3e2a6d3c67a3/8a87d6956d4e0d34be4bdbaf6b35a0d050483a23670df403942cc1bc2ce4b7a9": none of the resources are being tracked.
	Dec 07 20:14:53 ingress-addon-legacy-362953 kubelet[2875]: W1207 20:14:53.204765    2875 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-5m94h through plugin: invalid network status for
	Dec 07 20:14:53 ingress-addon-legacy-362953 kubelet[2875]: I1207 20:14:53.209971    2875 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 8d417618fbc4ac0221231c1fceab55286f2021aacc50b54e72214330a00d9c25
	Dec 07 20:14:53 ingress-addon-legacy-362953 kubelet[2875]: I1207 20:14:53.210300    2875 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 8a87d6956d4e0d34be4bdbaf6b35a0d050483a23670df403942cc1bc2ce4b7a9
	Dec 07 20:14:53 ingress-addon-legacy-362953 kubelet[2875]: E1207 20:14:53.212784    2875 pod_workers.go:191] Error syncing pod 0926d4c6-89c8-4723-a217-3e2a6d3c67a3 ("hello-world-app-5f5d8b66bb-5m94h_default(0926d4c6-89c8-4723-a217-3e2a6d3c67a3)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-5m94h_default(0926d4c6-89c8-4723-a217-3e2a6d3c67a3)"
	Dec 07 20:14:54 ingress-addon-legacy-362953 kubelet[2875]: W1207 20:14:54.219458    2875 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-5m94h through plugin: invalid network status for
	Dec 07 20:15:01 ingress-addon-legacy-362953 kubelet[2875]: E1207 20:15:01.699132    2875 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-q7rl9.179ea5e29071ff03", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-q7rl9", UID:"1a459d67-b054-4c40-8207-a3343aafd435", APIVersion:"v1", ResourceVersion:"476", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-362953"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc154a7d1698e8d03, ext:124708688453, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc154a7d1698e8d03, ext:124708688453, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-q7rl9.179ea5e29071ff03" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 07 20:15:01 ingress-addon-legacy-362953 kubelet[2875]: E1207 20:15:01.714149    2875 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-q7rl9.179ea5e29071ff03", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-q7rl9", UID:"1a459d67-b054-4c40-8207-a3343aafd435", APIVersion:"v1", ResourceVersion:"476", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-362953"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc154a7d1698e8d03, ext:124708688453, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc154a7d16a06535b, ext:124716538021, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-q7rl9.179ea5e29071ff03" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 07 20:15:04 ingress-addon-legacy-362953 kubelet[2875]: W1207 20:15:04.333378    2875 pod_container_deletor.go:77] Container "f202359bcccc0097c623af0236a4552b482200f792c8b85eec70eb54a1f087ba" not found in pod's containers
	Dec 07 20:15:05 ingress-addon-legacy-362953 kubelet[2875]: I1207 20:15:05.725067    2875 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 8a87d6956d4e0d34be4bdbaf6b35a0d050483a23670df403942cc1bc2ce4b7a9
	Dec 07 20:15:05 ingress-addon-legacy-362953 kubelet[2875]: E1207 20:15:05.725378    2875 pod_workers.go:191] Error syncing pod 0926d4c6-89c8-4723-a217-3e2a6d3c67a3 ("hello-world-app-5f5d8b66bb-5m94h_default(0926d4c6-89c8-4723-a217-3e2a6d3c67a3)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-5m94h_default(0926d4c6-89c8-4723-a217-3e2a6d3c67a3)"
	Dec 07 20:15:05 ingress-addon-legacy-362953 kubelet[2875]: I1207 20:15:05.763617    2875 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-jd6k9" (UniqueName: "kubernetes.io/secret/1a459d67-b054-4c40-8207-a3343aafd435-ingress-nginx-token-jd6k9") pod "1a459d67-b054-4c40-8207-a3343aafd435" (UID: "1a459d67-b054-4c40-8207-a3343aafd435")
	Dec 07 20:15:05 ingress-addon-legacy-362953 kubelet[2875]: I1207 20:15:05.763686    2875 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/1a459d67-b054-4c40-8207-a3343aafd435-webhook-cert") pod "1a459d67-b054-4c40-8207-a3343aafd435" (UID: "1a459d67-b054-4c40-8207-a3343aafd435")
	Dec 07 20:15:05 ingress-addon-legacy-362953 kubelet[2875]: I1207 20:15:05.768931    2875 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a459d67-b054-4c40-8207-a3343aafd435-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "1a459d67-b054-4c40-8207-a3343aafd435" (UID: "1a459d67-b054-4c40-8207-a3343aafd435"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 07 20:15:05 ingress-addon-legacy-362953 kubelet[2875]: I1207 20:15:05.773439    2875 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a459d67-b054-4c40-8207-a3343aafd435-ingress-nginx-token-jd6k9" (OuterVolumeSpecName: "ingress-nginx-token-jd6k9") pod "1a459d67-b054-4c40-8207-a3343aafd435" (UID: "1a459d67-b054-4c40-8207-a3343aafd435"). InnerVolumeSpecName "ingress-nginx-token-jd6k9". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 07 20:15:05 ingress-addon-legacy-362953 kubelet[2875]: I1207 20:15:05.863978    2875 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/1a459d67-b054-4c40-8207-a3343aafd435-webhook-cert") on node "ingress-addon-legacy-362953" DevicePath ""
	Dec 07 20:15:05 ingress-addon-legacy-362953 kubelet[2875]: I1207 20:15:05.864026    2875 reconciler.go:319] Volume detached for volume "ingress-nginx-token-jd6k9" (UniqueName: "kubernetes.io/secret/1a459d67-b054-4c40-8207-a3343aafd435-ingress-nginx-token-jd6k9") on node "ingress-addon-legacy-362953" DevicePath ""
	Dec 07 20:15:06 ingress-addon-legacy-362953 kubelet[2875]: W1207 20:15:06.734503    2875 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/1a459d67-b054-4c40-8207-a3343aafd435/volumes" does not exist
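	Note: the RemoveContainer / CrashLoopBackOff entries above show hello-world-app-5f5d8b66bb-5m94h restarting under a 20s back-off, matching the Exited hello-world-app container in the container status table. A minimal triage sketch, assuming the cluster from this run is still reachable (commands are illustrative; their output was not captured here):
	# Describe the pod to see restart counts and the last state/exit code.
	kubectl --context ingress-addon-legacy-362953 -n default describe pod hello-world-app-5f5d8b66bb-5m94h
	# Logs from the previous (crashed) container instance.
	kubectl --context ingress-addon-legacy-362953 -n default logs hello-world-app-5f5d8b66bb-5m94h --previous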
	
	* 
	* ==> storage-provisioner [af8a3783fff3] <==
	* I1207 20:13:15.330966       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1207 20:13:15.343196       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 20:13:15.343473       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1207 20:13:15.350649       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 20:13:15.350926       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-362953_6ecaccef-5553-478a-ba04-0e27523b9306!
	I1207 20:13:15.351973       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9b402c6a-6818-4e0d-90c2-d623eb3b5565", APIVersion:"v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-362953_6ecaccef-5553-478a-ba04-0e27523b9306 became leader
	I1207 20:13:15.452121       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-362953_6ecaccef-5553-478a-ba04-0e27523b9306!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-362953 -n ingress-addon-legacy-362953
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-362953 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (65.65s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (447.15s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.3846134833.exe start -p stopped-upgrade-187904 --memory=2200 --vm-driver=docker  --container-runtime=docker
E1207 20:40:21.244553    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/functional-718233/client.crt: no such file or directory
E1207 20:40:45.682097    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/skaffold-014348/client.crt: no such file or directory
E1207 20:40:45.687372    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/skaffold-014348/client.crt: no such file or directory
E1207 20:40:45.697615    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/skaffold-014348/client.crt: no such file or directory
E1207 20:40:45.717869    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/skaffold-014348/client.crt: no such file or directory
E1207 20:40:45.758140    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/skaffold-014348/client.crt: no such file or directory
E1207 20:40:45.838381    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/skaffold-014348/client.crt: no such file or directory
E1207 20:40:45.999114    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/skaffold-014348/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Non-zero exit: /tmp/minikube-v1.17.0.3846134833.exe start -p stopped-upgrade-187904 --memory=2200 --vm-driver=docker  --container-runtime=docker: exit status 80 (49.098879069s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-187904] minikube v1.17.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-2292/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig2674945823
	* Using the docker driver based on user configuration
	* minikube 1.32.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.32.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	* Starting control plane node stopped-upgrade-187904 in cluster stopped-upgrade-187904
	* Pulling base image ...
	* Downloading Kubernetes v1.20.2 preload ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v8-v1....: 514.92 MiB / 514.92 MiB  100.00% 23.63 MiB p/s
	X Exiting due to GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
E1207 20:40:46.319840    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/skaffold-014348/client.crt: no such file or directory
E1207 20:40:46.960762    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/skaffold-014348/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.3846134833.exe start -p stopped-upgrade-187904 --memory=2200 --vm-driver=docker  --container-runtime=docker
E1207 20:40:48.241111    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/skaffold-014348/client.crt: no such file or directory
E1207 20:40:50.802672    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/skaffold-014348/client.crt: no such file or directory
E1207 20:40:55.923061    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/skaffold-014348/client.crt: no such file or directory
E1207 20:41:06.163605    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/skaffold-014348/client.crt: no such file or directory
E1207 20:41:26.643933    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/skaffold-014348/client.crt: no such file or directory
E1207 20:42:07.604976    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/skaffold-014348/client.crt: no such file or directory
E1207 20:42:19.176958    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Non-zero exit: /tmp/minikube-v1.17.0.3846134833.exe start -p stopped-upgrade-187904 --memory=2200 --vm-driver=docker  --container-runtime=docker: exit status 80 (3m23.45727892s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-187904] minikube v1.17.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-2292/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig3013185267
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-187904 in cluster stopped-upgrade-187904
	* Pulling base image ...
	* docker "stopped-upgrade-187904" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...  (spinner output omitted)
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.3846134833.exe start -p stopped-upgrade-187904 --memory=2200 --vm-driver=docker  --container-runtime=docker
E1207 20:44:16.130542    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Non-zero exit: /tmp/minikube-v1.17.0.3846134833.exe start -p stopped-upgrade-187904 --memory=2200 --vm-driver=docker  --container-runtime=docker: exit status 80 (3m12.124546309s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-187904] minikube v1.17.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-2292/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig688480705
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-187904 in cluster stopped-upgrade-187904
	* docker "stopped-upgrade-187904" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...  (spinner output omitted)
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:202: legacy v1.17.0 start failed: exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (447.15s)
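The GUEST_PROVISION failure above, "can't create with that IP, address already in use", typically points at a leftover container or user-defined docker network on the test host still holding the address the legacy binary tries to reuse. A minimal cleanup sketch, not part of the recorded run, assuming shell access to the same host; the profile name stopped-upgrade-187904 is taken from the log, and the assumption that minikube created a docker network with that name may not hold on every setup:

	# look for stale containers or networks left over from the profile
	docker ps -a --filter name=stopped-upgrade-187904
	docker network ls --filter name=stopped-upgrade-187904
	# if such a network exists, show which subnet it still occupies
	docker network inspect stopped-upgrade-187904 --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'

	# remove the profile state and any unused networks, then re-run the upgrade test
	minikube delete -p stopped-upgrade-187904
	docker network prune -f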

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.41s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-187904
version_upgrade_test.go:219: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p stopped-upgrade-187904: exit status 85 (250.743436ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-843941                               | NoKubernetes-843941       | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC | 07 Dec 23 20:37 UTC |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	|         | --container-runtime=docker                           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-843941 sudo                          | NoKubernetes-843941       | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | systemctl is-active --quiet                          |                           |         |         |                     |                     |
	|         | service kubelet                                      |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-843941                               | NoKubernetes-843941       | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC | 07 Dec 23 20:37 UTC |
	| ssh     | -p cilium-590458 sudo cat                            | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | /etc/nsswitch.conf                                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-590458 sudo cat                            | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | /etc/hosts                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-590458 sudo cat                            | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | /etc/resolv.conf                                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-590458 sudo crictl                         | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | pods                                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-590458 sudo crictl                         | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | ps --all                                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-590458 sudo find                           | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | /etc/cni -type f -exec sh -c                         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-590458 sudo ip a s                         | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	| ssh     | -p cilium-590458 sudo ip r s                         | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	| ssh     | -p cilium-590458 sudo                                | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | iptables-save                                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-590458 sudo iptables                       | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | -t nat -L -n -v                                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-590458 sudo                                | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | systemctl status kubelet --all                       |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-590458 sudo                                | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | systemctl cat kubelet                                |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-590458 sudo                                | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-590458 sudo cat                            | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-590458 sudo cat                            | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-590458 sudo                                | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-590458 sudo                                | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-590458 sudo cat                            | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-590458 sudo docker                         | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-590458 sudo                                | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-590458 sudo                                | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-590458 sudo cat                            | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-590458 sudo cat                            | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-590458 sudo                                | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-590458 sudo                                | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-590458 sudo                                | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-590458 sudo cat                            | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-590458 sudo cat                            | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-590458 sudo                                | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-590458 sudo                                | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-590458 sudo                                | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-590458 sudo find                           | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-590458 sudo crio                           | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-590458                                     | cilium-590458             | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC | 07 Dec 23 20:37 UTC |
	| start   | -p force-systemd-env-286550                          | force-systemd-env-286550  | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC | 07 Dec 23 20:38 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                 |                           |         |         |                     |                     |
	|         | --container-runtime=docker                           |                           |         |         |                     |                     |
	| delete  | -p offline-docker-242133                             | offline-docker-242133     | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC | 07 Dec 23 20:37 UTC |
	| start   | -p force-systemd-flag-968747                         | force-systemd-flag-968747 | jenkins | v1.32.0 | 07 Dec 23 20:37 UTC | 07 Dec 23 20:38 UTC |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                 |                           |         |         |                     |                     |
	|         | --container-runtime=docker                           |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-286550                             | force-systemd-env-286550  | jenkins | v1.32.0 | 07 Dec 23 20:38 UTC | 07 Dec 23 20:38 UTC |
	|         | ssh docker info --format                             |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                                    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-286550                          | force-systemd-env-286550  | jenkins | v1.32.0 | 07 Dec 23 20:38 UTC | 07 Dec 23 20:38 UTC |
	| start   | -p docker-flags-996478                               | docker-flags-996478       | jenkins | v1.32.0 | 07 Dec 23 20:38 UTC | 07 Dec 23 20:39 UTC |
	|         | --cache-images=false                                 |                           |         |         |                     |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --install-addons=false                               |                           |         |         |                     |                     |
	|         | --wait=false                                         |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                                 |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                                 |                           |         |         |                     |                     |
	|         | --docker-opt=debug                                   |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                                |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                 |                           |         |         |                     |                     |
	|         | --container-runtime=docker                           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-968747                            | force-systemd-flag-968747 | jenkins | v1.32.0 | 07 Dec 23 20:38 UTC | 07 Dec 23 20:38 UTC |
	|         | ssh docker info --format                             |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                                    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-968747                         | force-systemd-flag-968747 | jenkins | v1.32.0 | 07 Dec 23 20:38 UTC | 07 Dec 23 20:38 UTC |
	| start   | -p cert-expiration-635698                            | cert-expiration-635698    | jenkins | v1.32.0 | 07 Dec 23 20:38 UTC | 07 Dec 23 20:39 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	|         | --container-runtime=docker                           |                           |         |         |                     |                     |
	| ssh     | docker-flags-996478 ssh                              | docker-flags-996478       | jenkins | v1.32.0 | 07 Dec 23 20:39 UTC | 07 Dec 23 20:39 UTC |
	|         | sudo systemctl show docker                           |                           |         |         |                     |                     |
	|         | --property=Environment                               |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | docker-flags-996478 ssh                              | docker-flags-996478       | jenkins | v1.32.0 | 07 Dec 23 20:39 UTC | 07 Dec 23 20:39 UTC |
	|         | sudo systemctl show docker                           |                           |         |         |                     |                     |
	|         | --property=ExecStart                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| delete  | -p docker-flags-996478                               | docker-flags-996478       | jenkins | v1.32.0 | 07 Dec 23 20:39 UTC | 07 Dec 23 20:39 UTC |
	| start   | -p cert-options-977678                               | cert-options-977678       | jenkins | v1.32.0 | 07 Dec 23 20:39 UTC | 07 Dec 23 20:39 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                            |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                        |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                          |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                     |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	|         | --container-runtime=docker                           |                           |         |         |                     |                     |
	| ssh     | cert-options-977678 ssh                              | cert-options-977678       | jenkins | v1.32.0 | 07 Dec 23 20:39 UTC | 07 Dec 23 20:39 UTC |
	|         | openssl x509 -text -noout -in                        |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                |                           |         |         |                     |                     |
	| ssh     | -p cert-options-977678 -- sudo                       | cert-options-977678       | jenkins | v1.32.0 | 07 Dec 23 20:39 UTC | 07 Dec 23 20:39 UTC |
	|         | cat /etc/kubernetes/admin.conf                       |                           |         |         |                     |                     |
	| delete  | -p cert-options-977678                               | cert-options-977678       | jenkins | v1.32.0 | 07 Dec 23 20:39 UTC | 07 Dec 23 20:39 UTC |
	| start   | -p cert-expiration-635698                            | cert-expiration-635698    | jenkins | v1.32.0 | 07 Dec 23 20:42 UTC | 07 Dec 23 20:42 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                              |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	|         | --container-runtime=docker                           |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-635698                            | cert-expiration-635698    | jenkins | v1.32.0 | 07 Dec 23 20:42 UTC | 07 Dec 23 20:42 UTC |
	| start   | -p missing-upgrade-883002                            | missing-upgrade-883002    | jenkins | v1.32.0 | 07 Dec 23 20:43 UTC | 07 Dec 23 20:44 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                                 |                           |         |         |                     |                     |
	|         | --container-runtime=docker                           |                           |         |         |                     |                     |
	| delete  | -p missing-upgrade-883002                            | missing-upgrade-883002    | jenkins | v1.32.0 | 07 Dec 23 20:44 UTC | 07 Dec 23 20:44 UTC |
	| start   | -p kubernetes-upgrade-771944                         | kubernetes-upgrade-771944 | jenkins | v1.32.0 | 07 Dec 23 20:44 UTC | 07 Dec 23 20:45 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                                 |                           |         |         |                     |                     |
	|         | --container-runtime=docker                           |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-771944                         | kubernetes-upgrade-771944 | jenkins | v1.32.0 | 07 Dec 23 20:45 UTC | 07 Dec 23 20:45 UTC |
	| start   | -p kubernetes-upgrade-771944                         | kubernetes-upgrade-771944 | jenkins | v1.32.0 | 07 Dec 23 20:45 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                    |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                                 |                           |         |         |                     |                     |
	|         | --container-runtime=docker                           |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 20:45:53
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 20:45:53.828106  218292 out.go:296] Setting OutFile to fd 1 ...
	I1207 20:45:53.828375  218292 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:45:53.828401  218292 out.go:309] Setting ErrFile to fd 2...
	I1207 20:45:53.828422  218292 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:45:53.828726  218292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-2292/.minikube/bin
	I1207 20:45:53.829163  218292 out.go:303] Setting JSON to false
	I1207 20:45:53.830186  218292 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":5297,"bootTime":1701976657,"procs":264,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1207 20:45:53.830284  218292 start.go:138] virtualization:  
	I1207 20:45:53.832675  218292 out.go:177] * [kubernetes-upgrade-771944] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1207 20:45:53.834571  218292 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 20:45:53.834654  218292 notify.go:220] Checking for updates...
	I1207 20:45:53.838105  218292 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 20:45:53.839896  218292 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17719-2292/kubeconfig
	I1207 20:45:53.841653  218292 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-2292/.minikube
	I1207 20:45:53.843518  218292 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1207 20:45:53.845089  218292 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 20:45:53.847529  218292 config.go:182] Loaded profile config "kubernetes-upgrade-771944": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1207 20:45:53.848096  218292 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 20:45:53.873004  218292 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1207 20:45:53.873111  218292 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 20:45:53.957472  218292 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-07 20:45:53.945986925 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1207 20:45:53.957585  218292 docker.go:295] overlay module found
	I1207 20:45:53.961058  218292 out.go:177] * Using the docker driver based on existing profile
	I1207 20:45:53.963492  218292 start.go:298] selected driver: docker
	I1207 20:45:53.963517  218292 start.go:902] validating driver "docker" against &{Name:kubernetes-upgrade-771944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-771944 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:45:53.963633  218292 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 20:45:53.964285  218292 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 20:45:54.030673  218292 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:35 SystemTime:2023-12-07 20:45:54.020346359 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1207 20:45:54.031071  218292 cni.go:84] Creating CNI manager for ""
	I1207 20:45:54.031097  218292 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 20:45:54.031112  218292 start_flags.go:323] config:
	{Name:kubernetes-upgrade-771944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:kubernetes-upgrade-771944 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: So
cketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:45:54.033457  218292 out.go:177] * Starting control plane node kubernetes-upgrade-771944 in cluster kubernetes-upgrade-771944
	I1207 20:45:54.035225  218292 cache.go:121] Beginning downloading kic base image for docker with docker
	I1207 20:45:54.037860  218292 out.go:177] * Pulling base image ...
	I1207 20:45:54.039404  218292 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime docker
	I1207 20:45:54.039465  218292 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17719-2292/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4
	I1207 20:45:54.039478  218292 cache.go:56] Caching tarball of preloaded images
	I1207 20:45:54.039479  218292 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c in local docker daemon
	I1207 20:45:54.039564  218292 preload.go:174] Found /home/jenkins/minikube-integration/17719-2292/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1207 20:45:54.039575  218292 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.1 on docker
	I1207 20:45:54.039677  218292 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubernetes-upgrade-771944/config.json ...
	I1207 20:45:54.059479  218292 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c in local docker daemon, skipping pull
	I1207 20:45:54.059508  218292 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c exists in daemon, skipping load
	I1207 20:45:54.059524  218292 cache.go:194] Successfully downloaded all kic artifacts
	I1207 20:45:54.059570  218292 start.go:365] acquiring machines lock for kubernetes-upgrade-771944: {Name:mk9aeda420a221cf2b5428e219e9a4cbfa835ef4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 20:45:54.059648  218292 start.go:369] acquired machines lock for "kubernetes-upgrade-771944" in 47.269µs
	I1207 20:45:54.059676  218292 start.go:96] Skipping create...Using existing machine configuration
	I1207 20:45:54.059700  218292 fix.go:54] fixHost starting: 
	I1207 20:45:54.059986  218292 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-771944 --format={{.State.Status}}
	I1207 20:45:54.083045  218292 fix.go:102] recreateIfNeeded on kubernetes-upgrade-771944: state=Stopped err=<nil>
	W1207 20:45:54.083075  218292 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 20:45:54.085890  218292 out.go:177] * Restarting existing docker container for "kubernetes-upgrade-771944" ...
	I1207 20:45:54.088188  218292 cli_runner.go:164] Run: docker start kubernetes-upgrade-771944
	I1207 20:45:54.427540  218292 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-771944 --format={{.State.Status}}
	I1207 20:45:54.454151  218292 kic.go:430] container "kubernetes-upgrade-771944" state is running.
	I1207 20:45:54.454523  218292 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-771944
	I1207 20:45:54.483432  218292 profile.go:148] Saving config to /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubernetes-upgrade-771944/config.json ...
	I1207 20:45:54.484876  218292 machine.go:88] provisioning docker machine ...
	I1207 20:45:54.484900  218292 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-771944"
	I1207 20:45:54.484957  218292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-771944
	I1207 20:45:54.506686  218292 main.go:141] libmachine: Using SSH client type: native
	I1207 20:45:54.507101  218292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32986 <nil> <nil>}
	I1207 20:45:54.507121  218292 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-771944 && echo "kubernetes-upgrade-771944" | sudo tee /etc/hostname
	I1207 20:45:54.507815  218292 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1207 20:45:57.651400  218292 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-771944
	
	I1207 20:45:57.651489  218292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-771944
	I1207 20:45:57.670053  218292 main.go:141] libmachine: Using SSH client type: native
	I1207 20:45:57.670472  218292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32986 <nil> <nil>}
	I1207 20:45:57.670497  218292 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-771944' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-771944/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-771944' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 20:45:57.797905  218292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 20:45:57.797935  218292 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17719-2292/.minikube CaCertPath:/home/jenkins/minikube-integration/17719-2292/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17719-2292/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17719-2292/.minikube}
	I1207 20:45:57.797957  218292 ubuntu.go:177] setting up certificates
	I1207 20:45:57.797966  218292 provision.go:83] configureAuth start
	I1207 20:45:57.798023  218292 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-771944
	I1207 20:45:57.820834  218292 provision.go:138] copyHostCerts
	I1207 20:45:57.820898  218292 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-2292/.minikube/ca.pem, removing ...
	I1207 20:45:57.820930  218292 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-2292/.minikube/ca.pem
	I1207 20:45:57.821014  218292 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-2292/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17719-2292/.minikube/ca.pem (1078 bytes)
	I1207 20:45:57.821129  218292 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-2292/.minikube/cert.pem, removing ...
	I1207 20:45:57.821140  218292 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-2292/.minikube/cert.pem
	I1207 20:45:57.821171  218292 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-2292/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17719-2292/.minikube/cert.pem (1123 bytes)
	I1207 20:45:57.821231  218292 exec_runner.go:144] found /home/jenkins/minikube-integration/17719-2292/.minikube/key.pem, removing ...
	I1207 20:45:57.821241  218292 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17719-2292/.minikube/key.pem
	I1207 20:45:57.821267  218292 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17719-2292/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17719-2292/.minikube/key.pem (1679 bytes)
	I1207 20:45:57.821327  218292 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17719-2292/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17719-2292/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17719-2292/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-771944 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-771944]
	I1207 20:45:58.263031  218292 provision.go:172] copyRemoteCerts
	I1207 20:45:58.263137  218292 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 20:45:58.263186  218292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-771944
	I1207 20:45:58.280949  218292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32986 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/kubernetes-upgrade-771944/id_rsa Username:docker}
	I1207 20:45:58.379275  218292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1207 20:45:58.406761  218292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1207 20:45:58.435017  218292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 20:45:58.463153  218292 provision.go:86] duration metric: configureAuth took 665.173862ms
	I1207 20:45:58.463180  218292 ubuntu.go:193] setting minikube options for container-runtime
	I1207 20:45:58.463362  218292 config.go:182] Loaded profile config "kubernetes-upgrade-771944": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.1
	I1207 20:45:58.463423  218292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-771944
	I1207 20:45:58.481464  218292 main.go:141] libmachine: Using SSH client type: native
	I1207 20:45:58.481877  218292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32986 <nil> <nil>}
	I1207 20:45:58.481894  218292 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1207 20:45:58.610465  218292 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1207 20:45:58.610492  218292 ubuntu.go:71] root file system type: overlay
	I1207 20:45:58.610609  218292 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1207 20:45:58.610678  218292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-771944
	I1207 20:45:58.629913  218292 main.go:141] libmachine: Using SSH client type: native
	I1207 20:45:58.630310  218292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32986 <nil> <nil>}
	I1207 20:45:58.630398  218292 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1207 20:45:58.773270  218292 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1207 20:45:58.773356  218292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-771944
	I1207 20:45:58.793678  218292 main.go:141] libmachine: Using SSH client type: native
	I1207 20:45:58.794103  218292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32986 <nil> <nil>}
	I1207 20:45:58.794129  218292 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1207 20:45:58.927996  218292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
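The docker.service update above is idempotent: a candidate unit is written to docker.service.new, diffed against the installed unit, and only moved into place (followed by daemon-reload, enable and restart) when the two differ. A minimal standalone sketch of that idiom, using the paths from the log (an illustration, not minikube's own code):
	new=/lib/systemd/system/docker.service.new
	cur=/lib/systemd/system/docker.service
	if ! sudo diff -u "$cur" "$new"; then      # files differ (or $cur is missing): install the new unit
	    sudo mv "$new" "$cur"
	    sudo systemctl daemon-reload
	    sudo systemctl enable docker
	    sudo systemctl restart docker
	fi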
	I1207 20:45:58.928058  218292 machine.go:91] provisioned docker machine in 4.443166027s
	I1207 20:45:58.928090  218292 start.go:300] post-start starting for "kubernetes-upgrade-771944" (driver="docker")
	I1207 20:45:58.928115  218292 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 20:45:58.928217  218292 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 20:45:58.928291  218292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-771944
	I1207 20:45:58.949415  218292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32986 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/kubernetes-upgrade-771944/id_rsa Username:docker}
	I1207 20:45:59.047612  218292 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 20:45:59.051763  218292 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1207 20:45:59.051801  218292 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1207 20:45:59.051813  218292 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1207 20:45:59.051820  218292 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1207 20:45:59.051830  218292 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-2292/.minikube/addons for local assets ...
	I1207 20:45:59.051888  218292 filesync.go:126] Scanning /home/jenkins/minikube-integration/17719-2292/.minikube/files for local assets ...
	I1207 20:45:59.051970  218292 filesync.go:149] local asset: /home/jenkins/minikube-integration/17719-2292/.minikube/files/etc/ssl/certs/76002.pem -> 76002.pem in /etc/ssl/certs
	I1207 20:45:59.052072  218292 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 20:45:59.062720  218292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/files/etc/ssl/certs/76002.pem --> /etc/ssl/certs/76002.pem (1708 bytes)
	I1207 20:45:59.090824  218292 start.go:303] post-start completed in 162.707264ms
	I1207 20:45:59.090903  218292 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 20:45:59.090948  218292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-771944
	I1207 20:45:59.108468  218292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32986 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/kubernetes-upgrade-771944/id_rsa Username:docker}
	I1207 20:45:59.202688  218292 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1207 20:45:59.208656  218292 fix.go:56] fixHost completed within 5.148961227s
	I1207 20:45:59.208691  218292 start.go:83] releasing machines lock for "kubernetes-upgrade-771944", held for 5.149028517s
	I1207 20:45:59.208803  218292 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-771944
	I1207 20:45:59.228789  218292 ssh_runner.go:195] Run: cat /version.json
	I1207 20:45:59.228806  218292 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 20:45:59.228845  218292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-771944
	I1207 20:45:59.228851  218292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-771944
	I1207 20:45:59.258690  218292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32986 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/kubernetes-upgrade-771944/id_rsa Username:docker}
	I1207 20:45:59.273958  218292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32986 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/kubernetes-upgrade-771944/id_rsa Username:docker}
	I1207 20:45:59.479726  218292 ssh_runner.go:195] Run: systemctl --version
	I1207 20:45:59.485338  218292 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1207 20:45:59.491013  218292 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1207 20:45:59.513529  218292 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1207 20:45:59.513608  218292 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1207 20:45:59.533537  218292 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1207 20:45:59.553868  218292 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 20:45:59.553912  218292 start.go:475] detecting cgroup driver to use...
	I1207 20:45:59.553961  218292 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1207 20:45:59.554075  218292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 20:45:59.575081  218292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1207 20:45:59.588165  218292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1207 20:45:59.600149  218292 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1207 20:45:59.600220  218292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1207 20:45:59.612031  218292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1207 20:45:59.624070  218292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1207 20:45:59.635842  218292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1207 20:45:59.647841  218292 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 20:45:59.658906  218292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1207 20:45:59.671305  218292 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 20:45:59.682839  218292 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 20:45:59.693437  218292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 20:45:59.789531  218292 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1207 20:45:59.907148  218292 start.go:475] detecting cgroup driver to use...
	I1207 20:45:59.907194  218292 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1207 20:45:59.907259  218292 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1207 20:45:59.927478  218292 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1207 20:45:59.927555  218292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1207 20:45:59.942879  218292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 20:45:59.963212  218292 ssh_runner.go:195] Run: which cri-dockerd
	I1207 20:45:59.967719  218292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1207 20:45:59.978816  218292 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1207 20:46:00.003840  218292 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1207 20:46:00.158624  218292 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1207 20:46:00.341670  218292 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1207 20:46:00.341857  218292 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1207 20:46:00.373563  218292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 20:46:00.490126  218292 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1207 20:46:00.903138  218292 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1207 20:46:00.994812  218292 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1207 20:46:01.100993  218292 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1207 20:46:01.195267  218292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 20:46:01.294669  218292 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1207 20:46:01.311767  218292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 20:46:01.414248  218292 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1207 20:46:01.506757  218292 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1207 20:46:01.506827  218292 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1207 20:46:01.512318  218292 start.go:543] Will wait 60s for crictl version
	I1207 20:46:01.512378  218292 ssh_runner.go:195] Run: which crictl
	I1207 20:46:01.517058  218292 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 20:46:01.578494  218292 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1207 20:46:01.578568  218292 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1207 20:46:01.606038  218292 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1207 20:46:01.635725  218292 out.go:204] * Preparing Kubernetes v1.29.0-rc.1 on Docker 24.0.7 ...
	I1207 20:46:01.635832  218292 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-771944 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1207 20:46:01.653556  218292 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1207 20:46:01.658249  218292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
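The host.minikube.internal mapping is maintained idempotently: any existing entry is filtered out of /etc/hosts before the fresh line is appended, so repeated starts never accumulate duplicates. A hedged sketch of the same pattern with the IP from the log:
	entry=$'192.168.67.1\thost.minikube.internal'
	# drop any stale mapping, append the current one, then install the result
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "$entry"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$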
	I1207 20:46:01.671728  218292 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime docker
	I1207 20:46:01.671803  218292 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1207 20:46:01.692558  218292 docker.go:671] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1207 20:46:01.692578  218292 docker.go:677] registry.k8s.io/kube-apiserver:v1.29.0-rc.1 wasn't preloaded
	I1207 20:46:01.692632  218292 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1207 20:46:01.703118  218292 ssh_runner.go:195] Run: which lz4
	I1207 20:46:01.707606  218292 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1207 20:46:01.712023  218292 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1207 20:46:01.712058  218292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (341347986 bytes)
	I1207 20:46:05.480947  218292 docker.go:635] Took 3.773388 seconds to copy over tarball
	I1207 20:46:05.481013  218292 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1207 20:46:07.549277  218292 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.068237s)
	I1207 20:46:07.549308  218292 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1207 20:46:07.692991  218292 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1207 20:46:07.703788  218292 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (4780 bytes)
	I1207 20:46:07.727677  218292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 20:46:07.874087  218292 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1207 20:46:09.892503  218292 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.018378867s)
	I1207 20:46:09.892591  218292 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1207 20:46:09.914479  218292 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	registry.k8s.io/kube-proxy:v1.29.0-rc.1
	registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1207 20:46:09.914502  218292 cache_images.go:84] Images are preloaded, skipping loading
	I1207 20:46:09.914569  218292 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1207 20:46:09.973628  218292 cni.go:84] Creating CNI manager for ""
	I1207 20:46:09.973651  218292 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 20:46:09.973676  218292 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 20:46:09.973697  218292 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-771944 NodeName:kubernetes-upgrade-771944 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca
.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 20:46:09.973834  218292 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-771944"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 20:46:09.973901  218292 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-771944 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.1 ClusterName:kubernetes-upgrade-771944 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1207 20:46:09.973967  218292 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.1
	I1207 20:46:09.985153  218292 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 20:46:09.985232  218292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 20:46:09.996315  218292 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (391 bytes)
	I1207 20:46:10.034687  218292 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1207 20:46:10.064097  218292 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2113 bytes)
	I1207 20:46:10.086923  218292 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1207 20:46:10.093483  218292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 20:46:10.110881  218292 certs.go:56] Setting up /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubernetes-upgrade-771944 for IP: 192.168.67.2
	I1207 20:46:10.110982  218292 certs.go:190] acquiring lock for shared ca certs: {Name:mkf0aeb9e21068cbc2b0de52461bf1fef9a8e437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:46:10.111154  218292 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17719-2292/.minikube/ca.key
	I1207 20:46:10.111201  218292 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17719-2292/.minikube/proxy-client-ca.key
	I1207 20:46:10.111297  218292 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubernetes-upgrade-771944/client.key
	I1207 20:46:10.111368  218292 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubernetes-upgrade-771944/apiserver.key.c7fa3a9e
	I1207 20:46:10.111421  218292 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubernetes-upgrade-771944/proxy-client.key
	I1207 20:46:10.111557  218292 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-2292/.minikube/certs/home/jenkins/minikube-integration/17719-2292/.minikube/certs/7600.pem (1338 bytes)
	W1207 20:46:10.111591  218292 certs.go:433] ignoring /home/jenkins/minikube-integration/17719-2292/.minikube/certs/home/jenkins/minikube-integration/17719-2292/.minikube/certs/7600_empty.pem, impossibly tiny 0 bytes
	I1207 20:46:10.111614  218292 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-2292/.minikube/certs/home/jenkins/minikube-integration/17719-2292/.minikube/certs/ca-key.pem (1675 bytes)
	I1207 20:46:10.111647  218292 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-2292/.minikube/certs/home/jenkins/minikube-integration/17719-2292/.minikube/certs/ca.pem (1078 bytes)
	I1207 20:46:10.111679  218292 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-2292/.minikube/certs/home/jenkins/minikube-integration/17719-2292/.minikube/certs/cert.pem (1123 bytes)
	I1207 20:46:10.111709  218292 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-2292/.minikube/certs/home/jenkins/minikube-integration/17719-2292/.minikube/certs/key.pem (1679 bytes)
	I1207 20:46:10.111762  218292 certs.go:437] found cert: /home/jenkins/minikube-integration/17719-2292/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17719-2292/.minikube/files/etc/ssl/certs/76002.pem (1708 bytes)
	I1207 20:46:10.113930  218292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubernetes-upgrade-771944/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 20:46:10.144507  218292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubernetes-upgrade-771944/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1207 20:46:10.174142  218292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubernetes-upgrade-771944/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 20:46:10.205365  218292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubernetes-upgrade-771944/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 20:46:10.234313  218292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 20:46:10.262791  218292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 20:46:10.290922  218292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 20:46:10.319510  218292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 20:46:10.347930  218292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/files/etc/ssl/certs/76002.pem --> /usr/share/ca-certificates/76002.pem (1708 bytes)
	I1207 20:46:10.376210  218292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 20:46:10.404480  218292 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17719-2292/.minikube/certs/7600.pem --> /usr/share/ca-certificates/7600.pem (1338 bytes)
	I1207 20:46:10.434978  218292 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 20:46:10.457357  218292 ssh_runner.go:195] Run: openssl version
	I1207 20:46:10.464584  218292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/76002.pem && ln -fs /usr/share/ca-certificates/76002.pem /etc/ssl/certs/76002.pem"
	I1207 20:46:10.476417  218292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/76002.pem
	I1207 20:46:10.481247  218292 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:07 /usr/share/ca-certificates/76002.pem
	I1207 20:46:10.481319  218292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/76002.pem
	I1207 20:46:10.490204  218292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/76002.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 20:46:10.501533  218292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 20:46:10.513810  218292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:46:10.518569  218292 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:02 /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:46:10.518639  218292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 20:46:10.527367  218292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 20:46:10.538320  218292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7600.pem && ln -fs /usr/share/ca-certificates/7600.pem /etc/ssl/certs/7600.pem"
	I1207 20:46:10.549986  218292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7600.pem
	I1207 20:46:10.554739  218292 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:07 /usr/share/ca-certificates/7600.pem
	I1207 20:46:10.554810  218292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7600.pem
	I1207 20:46:10.563786  218292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7600.pem /etc/ssl/certs/51391683.0"
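The openssl x509 -hash calls above compute the subject hash that names each CA symlink under /etc/ssl/certs (for example b5213941.0 for the minikube CA). A short sketch of creating such a link, with paths as they appear in the log:
	pem=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$pem")          # e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"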
	I1207 20:46:10.574979  218292 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 20:46:10.579605  218292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1207 20:46:10.588010  218292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1207 20:46:10.596620  218292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1207 20:46:10.605164  218292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1207 20:46:10.613650  218292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1207 20:46:10.622265  218292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
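Before deciding to reuse the existing cluster certificates, each one is verified not to expire within the next 24 hours (openssl -checkend 86400). A compact sketch covering the same certificates checked in the log:
	for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	           etcd/server etcd/healthcheck-client etcd/peer; do
	    openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${crt}.crt" \
	        || echo "${crt}.crt expires within 24h"
	done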
	I1207 20:46:10.630981  218292 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-771944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:kubernetes-upgrade-771944 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:46:10.631141  218292 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1207 20:46:10.653967  218292 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 20:46:10.665023  218292 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1207 20:46:10.665044  218292 kubeadm.go:636] restartCluster start
	I1207 20:46:10.665135  218292 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1207 20:46:10.675186  218292 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1207 20:46:10.675637  218292 kubeconfig.go:135] verify returned: extract IP: "kubernetes-upgrade-771944" does not appear in /home/jenkins/minikube-integration/17719-2292/kubeconfig
	I1207 20:46:10.675734  218292 kubeconfig.go:146] "kubernetes-upgrade-771944" context is missing from /home/jenkins/minikube-integration/17719-2292/kubeconfig - will repair!
	I1207 20:46:10.676025  218292 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-2292/kubeconfig: {Name:mkb58bbc3586feb84db8c4c89653a5136ccfc407 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:46:10.676640  218292 kapi.go:59] client config for kubernetes-upgrade-771944: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubernetes-upgrade-771944/client.crt", KeyFile:"/home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubernetes-upgrade-771944/client.key", CAFile:"/home/jenkins/minikube-integration/17719-2292/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6350), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 20:46:10.677534  218292 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1207 20:46:10.688428  218292 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2023-12-07 20:45:15.150765602 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2023-12-07 20:46:10.078305029 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.67.2
	@@ -11,13 +11,13 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/dockershim.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "kubernetes-upgrade-771944"
	   kubeletExtraArgs:
	     node-ip: 192.168.67.2
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	@@ -31,16 +31,14 @@
	   extraArgs:
	     leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	-clusterName: kubernetes-upgrade-771944
	+clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	-dns:
	-  type: CoreDNS
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	-      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	-kubernetesVersion: v1.16.0
	+      proxy-refresh-interval: "70000"
	+kubernetesVersion: v1.29.0-rc.1
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I1207 20:46:10.688450  218292 kubeadm.go:1135] stopping kube-system containers ...
	I1207 20:46:10.688510  218292 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1207 20:46:10.713408  218292 docker.go:469] Stopping containers: [035a46194530 4f6f3c30654b ca2c49b13ade 1146f9953c30 d06b2922193b 8fa295021b94 4e505c64c3a6 4eee7793dcda ef0a427d34ba bea752944ab5 7b3ed4313617 73b2813d5397 ac34341d3d6b]
	I1207 20:46:10.713492  218292 ssh_runner.go:195] Run: docker stop 035a46194530 4f6f3c30654b ca2c49b13ade 1146f9953c30 d06b2922193b 8fa295021b94 4e505c64c3a6 4eee7793dcda ef0a427d34ba bea752944ab5 7b3ed4313617 73b2813d5397 ac34341d3d6b
	I1207 20:46:10.736824  218292 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1207 20:46:10.752043  218292 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 20:46:10.762962  218292 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5703 Dec  7 20:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5743 Dec  7 20:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5819 Dec  7 20:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5687 Dec  7 20:45 /etc/kubernetes/scheduler.conf
	
	I1207 20:46:10.763084  218292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1207 20:46:10.773858  218292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1207 20:46:10.784460  218292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1207 20:46:10.795709  218292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1207 20:46:10.806331  218292 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 20:46:10.816808  218292 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1207 20:46:10.816840  218292 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 20:46:10.876666  218292 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 20:46:14.947034  218292 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (4.070276593s)
	I1207 20:46:14.947061  218292 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1207 20:46:15.159509  218292 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 20:46:15.244989  218292 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
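Instead of a full kubeadm init, restartCluster re-runs only the individual init phases against the regenerated /var/tmp/minikube/kubeadm.yaml, reusing the existing etcd data and certificates where possible. The sequence from the log, collapsed into one sketch:
	BIN=/var/lib/minikube/binaries/v1.29.0-rc.1
	CFG=/var/tmp/minikube/kubeadm.yaml
	# $phase is left unquoted on purpose so "certs all" expands to two arguments
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	    sudo env PATH="$BIN:$PATH" "$BIN/kubeadm" init phase $phase --config "$CFG"
	done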
	I1207 20:46:15.347516  218292 api_server.go:52] waiting for apiserver process to appear ...
	I1207 20:46:15.347590  218292 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 20:46:15.365278  218292 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 20:46:15.879129  218292 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 20:46:16.378919  218292 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 20:46:16.408525  218292 api_server.go:72] duration metric: took 1.061008972s to wait for apiserver process to appear ...
	I1207 20:46:16.408554  218292 api_server.go:88] waiting for apiserver healthz status ...
	I1207 20:46:16.408572  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:16.408847  218292 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I1207 20:46:16.408874  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:16.409004  218292 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I1207 20:46:16.909631  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:20.910099  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 20:46:20.910126  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 20:46:20.910137  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:20.980603  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1207 20:46:20.980629  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1207 20:46:21.409849  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:21.418527  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:21.418566  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:46:21.909899  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:21.919451  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:21.919480  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:46:22.410100  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:22.418676  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:22.418705  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:46:22.909894  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:22.918196  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:22.918225  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:46:23.409852  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:25.419378  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:25.419409  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:46:25.419426  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:25.428504  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:25.428535  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:46:25.909935  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:25.918277  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:25.918307  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:46:26.409902  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:26.418431  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:26.418461  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:46:26.909909  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:26.918583  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:26.918615  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:46:27.409866  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:29.419338  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:29.419374  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:46:29.419387  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:29.427691  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:29.427719  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:46:29.909188  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:29.917374  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:29.917404  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:46:30.409899  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:30.417993  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:30.418030  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:46:30.909721  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:30.917823  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:30.917849  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:46:31.409363  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:33.418547  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:33.418579  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:46:33.418592  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:35.427499  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:35.427562  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:46:35.427580  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:37.436000  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:37.436030  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:46:37.436051  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:37.444093  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:37.444125  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:46:37.909698  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:39.918786  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:39.918818  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:46:39.918833  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:41.927463  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:41.927496  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:46:41.927516  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:43.936756  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:43.936788  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:46:43.936805  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:45.945842  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:45.945874  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:46:45.945889  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:47.954662  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:47.954694  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:46:47.954717  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:49.964036  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:49.964074  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:46:49.964168  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:49.972327  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:49.972357  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:46:50.409892  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:52.419231  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:52.419268  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:46:52.419281  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:54.428303  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:54.428338  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
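
Because every 500 response carries the same line-oriented [+]/[-] breakdown, tooling that wants to track which checks are stuck (here: etcd, poststarthook/rbac/bootstrap-roles, poststarthook/scheduling/bootstrap-system-priority-classes) can parse the body directly instead of diffing raw text. A throwaway parser for that format; the line shapes are inferred from the log above and the type names are my own:

package main

import (
	"fmt"
	"strings"
)

// checkResult is one parsed line of a /healthz verbose body.
type checkResult struct {
	Name string
	OK   bool
}

// parseHealthz extracts per-check results, ignoring trailer lines such as
// "healthz check failed".
func parseHealthz(body string) []checkResult {
	var out []checkResult
	for _, line := range strings.Split(body, "\n") {
		line = strings.TrimSpace(line)
		switch {
		case strings.HasPrefix(line, "[+]"):
			name := strings.TrimSuffix(strings.TrimPrefix(line, "[+]"), " ok")
			out = append(out, checkResult{Name: name, OK: true})
		case strings.HasPrefix(line, "[-]"):
			// "[-]etcd failed: reason withheld" -> "etcd"
			name, _, _ := strings.Cut(strings.TrimPrefix(line, "[-]"), " failed")
			out = append(out, checkResult{Name: name, OK: false})
		}
	}
	return out
}

func main() {
	sample := "[+]ping ok\n[-]etcd failed: reason withheld\n[+]log ok\nhealthz check failed"
	for _, c := range parseHealthz(sample) {
		fmt.Printf("ok=%-5v %s\n", c.OK, c.Name)
	}
}
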
	I1207 20:46:54.428356  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:56.437545  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:56.437578  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:46:56.437596  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:58.447055  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:58.447089  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:46:58.447102  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:58.455085  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:58.455127  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:46:58.909550  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:58.917876  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:58.917907  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:46:59.409164  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:59.417267  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:59.417294  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:46:59.909869  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:46:59.917978  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:46:59.918004  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:47:00.409259  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:47:02.418635  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:47:02.418671  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:47:02.418684  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:47:04.427208  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:47:04.427238  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:47:04.427253  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:47:04.437514  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:47:04.437548  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:47:04.909840  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:47:05.424479  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:47:05.424514  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:47:05.424534  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:47:05.434117  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:47:05.434151  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:47:05.909680  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:47:05.919818  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:47:05.919854  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:47:06.409446  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:47:06.418324  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:47:06.418363  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:47:06.909861  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:47:06.918407  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:47:06.918439  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:47:07.409874  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:47:07.418298  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:47:07.418335  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:47:07.909941  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:47:07.919382  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1207 20:47:07.919421  218292 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1207 20:47:08.409843  218292 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1207 20:47:08.419406  218292 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1207 20:47:08.441495  218292 api_server.go:141] control plane version: v1.29.0-rc.1
	I1207 20:47:08.441529  218292 api_server.go:131] duration metric: took 52.032966584s to wait for apiserver health ...
	I1207 20:47:08.441539  218292 cni.go:84] Creating CNI manager for ""
	I1207 20:47:08.441562  218292 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 20:47:08.443770  218292 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 20:47:08.445637  218292 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 20:47:08.467585  218292 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1207 20:47:08.493801  218292 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 20:47:08.505006  218292 system_pods.go:59] 7 kube-system pods found
	I1207 20:47:08.505042  218292 system_pods.go:61] "coredns-5644d7b6d9-95lmh" [1a3b64b9-9585-43f4-9316-dcb6a036dd2c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 20:47:08.505051  218292 system_pods.go:61] "etcd-kubernetes-upgrade-771944" [2376babf-a7bb-43de-8792-2c56754dfe4f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1207 20:47:08.505061  218292 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-771944" [ed6ee2f9-fd4d-4714-8752-2e9f11baaa0e] Pending
	I1207 20:47:08.505071  218292 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-771944" [dd4d17c7-eb69-4b0b-be9e-02ba7718a9b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1207 20:47:08.505080  218292 system_pods.go:61] "kube-proxy-wxwwj" [32d1dbbc-8528-41c0-a21e-ced9588f4bff] Running
	I1207 20:47:08.505086  218292 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-771944" [4a11c7c9-3bc6-402a-933b-533512c93af7] Running
	I1207 20:47:08.505092  218292 system_pods.go:61] "storage-provisioner" [fe26a590-b442-46ca-8091-ede8aada93df] Running
	I1207 20:47:08.505111  218292 system_pods.go:74] duration metric: took 11.283721ms to wait for pod list to return data ...
	I1207 20:47:08.505119  218292 node_conditions.go:102] verifying NodePressure condition ...
	I1207 20:47:08.509605  218292 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1207 20:47:08.509638  218292 node_conditions.go:123] node cpu capacity is 2
	I1207 20:47:08.509649  218292 node_conditions.go:105] duration metric: took 4.51769ms to run NodePressure ...
	I1207 20:47:08.509675  218292 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1207 20:47:09.380893  218292 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 20:47:09.390840  218292 ops.go:34] apiserver oom_adj: -16
	I1207 20:47:09.390859  218292 kubeadm.go:640] restartCluster took 58.725807662s
	I1207 20:47:09.390869  218292 kubeadm.go:406] StartCluster complete in 58.759906896s
	I1207 20:47:09.390884  218292 settings.go:142] acquiring lock: {Name:mk4e1ad85078db32f53ce2cb878f95b1dc79d720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:47:09.390941  218292 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17719-2292/kubeconfig
	I1207 20:47:09.391617  218292 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17719-2292/kubeconfig: {Name:mkb58bbc3586feb84db8c4c89653a5136ccfc407 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 20:47:09.392345  218292 kapi.go:59] client config for kubernetes-upgrade-771944: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubernetes-upgrade-771944/client.crt", KeyFile:"/home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubernetes-upgrade-771944/client.key", CAFile:"/home/jenkins/minikube-integration/17719-2292/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6350), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 20:47:09.392902  218292 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 20:47:09.393047  218292 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1207 20:47:09.393138  218292 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-771944"
	I1207 20:47:09.393153  218292 addons.go:231] Setting addon storage-provisioner=true in "kubernetes-upgrade-771944"
	W1207 20:47:09.393160  218292 addons.go:240] addon storage-provisioner should already be in state true
	I1207 20:47:09.393090  218292 config.go:182] Loaded profile config "kubernetes-upgrade-771944": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.1
	I1207 20:47:09.393229  218292 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-771944"
	I1207 20:47:09.393257  218292 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-771944"
	I1207 20:47:09.393238  218292 host.go:66] Checking if "kubernetes-upgrade-771944" exists ...
	I1207 20:47:09.393596  218292 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-771944 --format={{.State.Status}}
	I1207 20:47:09.393832  218292 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-771944 --format={{.State.Status}}
	I1207 20:47:09.397128  218292 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-771944" context rescaled to 1 replicas
	I1207 20:47:09.397213  218292 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 20:47:09.399262  218292 out.go:177] * Verifying Kubernetes components...
	I1207 20:47:09.401038  218292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 20:47:09.440556  218292 kapi.go:59] client config for kubernetes-upgrade-771944: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubernetes-upgrade-771944/client.crt", KeyFile:"/home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubernetes-upgrade-771944/client.key", CAFile:"/home/jenkins/minikube-integration/17719-2292/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6350), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 20:47:09.442285  218292 addons.go:231] Setting addon default-storageclass=true in "kubernetes-upgrade-771944"
	W1207 20:47:09.442313  218292 addons.go:240] addon default-storageclass should already be in state true
	I1207 20:47:09.442371  218292 host.go:66] Checking if "kubernetes-upgrade-771944" exists ...
	I1207 20:47:09.442858  218292 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-771944 --format={{.State.Status}}
	I1207 20:47:09.446855  218292 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 20:47:09.448760  218292 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 20:47:09.448782  218292 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 20:47:09.448846  218292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-771944
	I1207 20:47:09.492992  218292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32986 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/kubernetes-upgrade-771944/id_rsa Username:docker}
	I1207 20:47:09.497137  218292 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 20:47:09.497158  218292 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 20:47:09.497225  218292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-771944
	I1207 20:47:09.526464  218292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32986 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/kubernetes-upgrade-771944/id_rsa Username:docker}
	I1207 20:47:09.625513  218292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 20:47:09.681101  218292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 20:47:09.694936  218292 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1207 20:47:09.695034  218292 api_server.go:52] waiting for apiserver process to appear ...
	I1207 20:47:09.695114  218292 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p stopped-upgrade-187904"

-- /stdout --
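Most of the dump above is minikube's apiserver health-wait loop: api_server.go polls https://192.168.67.2:8443/healthz roughly every half second, printing the per-component breakdown on each 500 (first etcd, then the rbac/bootstrap-roles post-start hook), until the endpoint returns 200 -- here after about 52s. A minimal sketch of that polling pattern, assuming a plain HTTPS client (the real code trusts the cluster CA and presents client certificates; all names below are illustrative, not minikube's actual implementation):

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls /healthz until it returns 200, printing the
// component breakdown on every non-200 response, in the spirit of the
// "Checking apiserver healthz" messages in the dump above.
func waitForHealthz(ctx context.Context, url string) error {
	client := &http.Client{
		Transport: &http.Transport{
			// The real client trusts the cluster CA; skipping verification
			// here only keeps the sketch self-contained.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 5 * time.Second,
	}
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "/healthz returned 200: ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond): // matches the ~0.5s cadence above
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	if err := waitForHealthz(ctx, "https://192.168.67.2:8443/healthz"); err != nil {
		fmt.Println("apiserver never became healthy:", err)
	}
}
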
version_upgrade_test.go:221: `minikube logs` after upgrade to HEAD from v1.17.0 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.41s)
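For context on how this verdict is reached: the upgrade tests shell out to the built minikube binary and assert on its exit status, so the exit status 85 from `minikube logs` above fails the test directly. A rough, self-contained sketch of that pattern (the binary path and profile name mirror the log; the helper itself is illustrative, not the actual test code in version_upgrade_test.go):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// runMinikube invokes the minikube binary the way the integration tests do:
// run the command, capture combined output, and surface a non-zero exit.
func runMinikube(ctx context.Context, binary string, args ...string) (string, error) {
	cmd := exec.CommandContext(ctx, binary, args...)
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	// Mirrors the failing step above: `minikube logs` on the upgraded profile.
	out, err := runMinikube(ctx, "out/minikube-linux-arm64", "-p", "stopped-upgrade-187904", "logs")
	if err != nil {
		fmt.Printf("minikube logs failed: %v\n%s\n", err, out) // e.g. exit status 85
	}
}
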


Test pass (299/330)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 14.76
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.28.4/json-events 13.19
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.08
17 TestDownloadOnly/v1.29.0-rc.1/json-events 11.43
18 TestDownloadOnly/v1.29.0-rc.1/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.1/LogsDuration 0.18
23 TestDownloadOnly/DeleteAll 0.33
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.21
26 TestBinaryMirror 0.62
27 TestOffline 103.88
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
32 TestAddons/Setup 145.27
34 TestAddons/parallel/Registry 14.74
36 TestAddons/parallel/InspektorGadget 10.93
37 TestAddons/parallel/MetricsServer 5.95
40 TestAddons/parallel/CSI 53.47
41 TestAddons/parallel/Headlamp 12.19
42 TestAddons/parallel/CloudSpanner 5.55
43 TestAddons/parallel/LocalPath 53.14
44 TestAddons/parallel/NvidiaDevicePlugin 5.5
47 TestAddons/serial/GCPAuth/Namespaces 0.18
48 TestAddons/StoppedEnableDisable 11.24
49 TestCertOptions 39.08
50 TestCertExpiration 251.04
51 TestDockerFlags 48.74
52 TestForceSystemdFlag 45.9
53 TestForceSystemdEnv 47.23
59 TestErrorSpam/setup 32.91
60 TestErrorSpam/start 0.86
61 TestErrorSpam/status 1.12
62 TestErrorSpam/pause 1.42
63 TestErrorSpam/unpause 1.64
64 TestErrorSpam/stop 11.01
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 77.02
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 34.97
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.96
76 TestFunctional/serial/CacheCmd/cache/add_local 0.99
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
78 TestFunctional/serial/CacheCmd/cache/list 0.07
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.34
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.69
81 TestFunctional/serial/CacheCmd/cache/delete 0.15
82 TestFunctional/serial/MinikubeKubectlCmd 0.15
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
84 TestFunctional/serial/ExtraConfig 43.27
85 TestFunctional/serial/ComponentHealth 0.11
86 TestFunctional/serial/LogsCmd 1.31
87 TestFunctional/serial/LogsFileCmd 1.31
88 TestFunctional/serial/InvalidService 5
90 TestFunctional/parallel/ConfigCmd 0.64
91 TestFunctional/parallel/DashboardCmd 12.12
92 TestFunctional/parallel/DryRun 0.75
93 TestFunctional/parallel/InternationalLanguage 0.35
94 TestFunctional/parallel/StatusCmd 1.36
98 TestFunctional/parallel/ServiceCmdConnect 7.8
99 TestFunctional/parallel/AddonsCmd 0.23
100 TestFunctional/parallel/PersistentVolumeClaim 26.12
102 TestFunctional/parallel/SSHCmd 0.78
103 TestFunctional/parallel/CpCmd 1.56
105 TestFunctional/parallel/FileSync 0.37
106 TestFunctional/parallel/CertSync 2.47
110 TestFunctional/parallel/NodeLabels 0.09
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.39
114 TestFunctional/parallel/License 0.41
115 TestFunctional/parallel/Version/short 0.14
116 TestFunctional/parallel/Version/components 1.13
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
121 TestFunctional/parallel/ImageCommands/ImageBuild 2.93
122 TestFunctional/parallel/ImageCommands/Setup 1.81
123 TestFunctional/parallel/DockerEnv/bash 1.42
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.25
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.28
126 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.24
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.47
128 TestFunctional/parallel/ServiceCmd/DeployApp 10.33
129 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.98
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.41
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.1
132 TestFunctional/parallel/ServiceCmd/List 0.49
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.66
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.64
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.51
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.91
137 TestFunctional/parallel/ServiceCmd/Format 0.61
138 TestFunctional/parallel/ServiceCmd/URL 0.59
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.4
141 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.83
142 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
144 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.67
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.15
146 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
150 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
151 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
152 TestFunctional/parallel/ProfileCmd/profile_list 0.63
153 TestFunctional/parallel/ProfileCmd/profile_json_output 0.49
154 TestFunctional/parallel/MountCmd/any-port 7.88
155 TestFunctional/parallel/MountCmd/specific-port 2.46
156 TestFunctional/parallel/MountCmd/VerifyCleanup 3.02
157 TestFunctional/delete_addon-resizer_images 0.11
158 TestFunctional/delete_my-image_image 0.02
159 TestFunctional/delete_minikube_cached_images 0.02
163 TestImageBuild/serial/Setup 33.77
164 TestImageBuild/serial/NormalBuild 1.82
165 TestImageBuild/serial/BuildWithBuildArg 0.94
166 TestImageBuild/serial/BuildWithDockerIgnore 0.75
167 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.74
170 TestIngressAddonLegacy/StartLegacyK8sCluster 113.21
172 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.05
173 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 1.79
177 TestJSONOutput/start/Command 89.9
178 TestJSONOutput/start/Audit 0
180 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/pause/Command 0.65
184 TestJSONOutput/pause/Audit 0
186 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/unpause/Command 0.58
190 TestJSONOutput/unpause/Audit 0
192 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/stop/Command 5.8
196 TestJSONOutput/stop/Audit 0
198 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
200 TestErrorJSONOutput 0.27
202 TestKicCustomNetwork/create_custom_network 38.15
203 TestKicCustomNetwork/use_default_bridge_network 35.36
204 TestKicExistingNetwork 34.15
205 TestKicCustomSubnet 36.47
206 TestKicStaticIP 35.48
207 TestMainNoArgs 0.08
208 TestMinikubeProfile 74.34
211 TestMountStart/serial/StartWithMountFirst 8.04
212 TestMountStart/serial/VerifyMountFirst 0.3
213 TestMountStart/serial/StartWithMountSecond 8.28
214 TestMountStart/serial/VerifyMountSecond 0.3
215 TestMountStart/serial/DeleteFirst 1.53
216 TestMountStart/serial/VerifyMountPostDelete 0.3
217 TestMountStart/serial/Stop 1.24
218 TestMountStart/serial/RestartStopped 9.42
219 TestMountStart/serial/VerifyMountPostStop 0.29
222 TestMultiNode/serial/FreshStart2Nodes 80.7
223 TestMultiNode/serial/DeployApp2Nodes 55.83
224 TestMultiNode/serial/PingHostFrom2Pods 1.29
225 TestMultiNode/serial/AddNode 21.71
226 TestMultiNode/serial/MultiNodeLabels 0.09
227 TestMultiNode/serial/ProfileList 0.35
228 TestMultiNode/serial/CopyFile 11.54
229 TestMultiNode/serial/StopNode 2.41
230 TestMultiNode/serial/StartAfterStop 14.41
231 TestMultiNode/serial/RestartKeepsNodes 120.63
232 TestMultiNode/serial/DeleteNode 5.32
233 TestMultiNode/serial/StopMultiNode 21.95
234 TestMultiNode/serial/RestartMultiNode 84.77
235 TestMultiNode/serial/ValidateNameConflict 36.1
240 TestPreload 180.72
242 TestScheduledStopUnix 109
243 TestSkaffold 109.71
245 TestInsufficientStorage 11.77
246 TestRunningBinaryUpgrade 135.59
248 TestKubernetesUpgrade 192.45
249 TestMissingContainerUpgrade 117.88
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
252 TestNoKubernetes/serial/StartWithK8s 47.11
253 TestNoKubernetes/serial/StartWithStopK8s 17.78
254 TestNoKubernetes/serial/Start 7.29
255 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
256 TestNoKubernetes/serial/ProfileList 1.01
257 TestNoKubernetes/serial/Stop 1.25
258 TestNoKubernetes/serial/StartNoArgs 8.68
259 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
271 TestStoppedBinaryUpgrade/Setup 1.08
282 TestPause/serial/Start 57.54
283 TestPause/serial/SecondStartNoReconfiguration 39.88
284 TestPause/serial/Pause 0.99
285 TestPause/serial/VerifyStatus 0.38
286 TestPause/serial/Unpause 0.73
287 TestPause/serial/PauseAgain 1.24
288 TestNetworkPlugins/group/auto/Start 97.48
289 TestPause/serial/DeletePaused 2.85
290 TestPause/serial/VerifyDeletedResources 0.3
291 TestNetworkPlugins/group/kindnet/Start 71.12
292 TestNetworkPlugins/group/kindnet/ControllerPod 5.04
293 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
294 TestNetworkPlugins/group/kindnet/NetCatPod 11.35
295 TestNetworkPlugins/group/kindnet/DNS 0.24
296 TestNetworkPlugins/group/kindnet/Localhost 0.19
297 TestNetworkPlugins/group/kindnet/HairPin 0.2
298 TestNetworkPlugins/group/auto/KubeletFlags 0.37
299 TestNetworkPlugins/group/auto/NetCatPod 10.61
300 TestNetworkPlugins/group/auto/DNS 0.3
301 TestNetworkPlugins/group/auto/Localhost 0.25
302 TestNetworkPlugins/group/auto/HairPin 0.26
303 TestNetworkPlugins/group/calico/Start 85.64
304 TestNetworkPlugins/group/custom-flannel/Start 72.32
305 TestNetworkPlugins/group/calico/ControllerPod 5.04
306 TestNetworkPlugins/group/calico/KubeletFlags 0.36
307 TestNetworkPlugins/group/calico/NetCatPod 12.5
308 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.43
309 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.53
310 TestNetworkPlugins/group/custom-flannel/DNS 0.32
311 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
312 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
313 TestNetworkPlugins/group/calico/DNS 0.41
314 TestNetworkPlugins/group/calico/Localhost 0.21
315 TestNetworkPlugins/group/calico/HairPin 0.19
316 TestNetworkPlugins/group/false/Start 94.75
317 TestNetworkPlugins/group/enable-default-cni/Start 57.99
318 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
319 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.38
320 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
321 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
322 TestNetworkPlugins/group/enable-default-cni/HairPin 0.21
323 TestNetworkPlugins/group/flannel/Start 71.24
324 TestNetworkPlugins/group/false/KubeletFlags 0.41
325 TestNetworkPlugins/group/false/NetCatPod 13.48
326 TestNetworkPlugins/group/false/DNS 0.26
327 TestNetworkPlugins/group/false/Localhost 0.23
328 TestNetworkPlugins/group/false/HairPin 0.28
329 TestNetworkPlugins/group/bridge/Start 53.92
330 TestNetworkPlugins/group/flannel/ControllerPod 5.03
331 TestNetworkPlugins/group/flannel/KubeletFlags 0.52
332 TestNetworkPlugins/group/flannel/NetCatPod 11.47
333 TestNetworkPlugins/group/flannel/DNS 0.25
334 TestNetworkPlugins/group/flannel/Localhost 0.23
335 TestNetworkPlugins/group/flannel/HairPin 0.2
336 TestNetworkPlugins/group/bridge/KubeletFlags 0.53
337 TestNetworkPlugins/group/bridge/NetCatPod 12.56
338 TestNetworkPlugins/group/bridge/DNS 26.39
339 TestNetworkPlugins/group/kubenet/Start 87.17
340 TestNetworkPlugins/group/bridge/Localhost 0.23
341 TestNetworkPlugins/group/bridge/HairPin 0.19
343 TestStartStop/group/old-k8s-version/serial/FirstStart 125.27
344 TestNetworkPlugins/group/kubenet/KubeletFlags 0.47
345 TestNetworkPlugins/group/kubenet/NetCatPod 11.42
346 TestNetworkPlugins/group/kubenet/DNS 0.23
347 TestNetworkPlugins/group/kubenet/Localhost 0.36
348 TestNetworkPlugins/group/kubenet/HairPin 0.27
350 TestStartStop/group/no-preload/serial/FirstStart 56.37
351 TestStartStop/group/old-k8s-version/serial/DeployApp 8.56
352 TestStartStop/group/no-preload/serial/DeployApp 9.08
353 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.18
354 TestStartStop/group/old-k8s-version/serial/Stop 11.07
355 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.1
356 TestStartStop/group/no-preload/serial/Stop 11.1
357 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
358 TestStartStop/group/old-k8s-version/serial/SecondStart 445.62
359 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.35
360 TestStartStop/group/no-preload/serial/SecondStart 323.2
361 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 15.03
362 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.16
363 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.37
364 TestStartStop/group/no-preload/serial/Pause 5.09
366 TestStartStop/group/embed-certs/serial/FirstStart 58.35
367 TestStartStop/group/embed-certs/serial/DeployApp 8.52
368 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.31
369 TestStartStop/group/embed-certs/serial/Stop 11.01
370 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
371 TestStartStop/group/embed-certs/serial/SecondStart 351.31
372 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
373 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
374 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
375 TestStartStop/group/old-k8s-version/serial/Pause 3.54
377 TestStartStop/group/newest-cni/serial/FirstStart 52.32
378 TestStartStop/group/newest-cni/serial/DeployApp 0
379 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.22
380 TestStartStop/group/newest-cni/serial/Stop 11
381 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
382 TestStartStop/group/newest-cni/serial/SecondStart 33.91
383 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
384 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
385 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
386 TestStartStop/group/newest-cni/serial/Pause 3.6
388 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 52.93
389 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.51
390 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.19
391 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.03
392 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
393 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 321.24
394 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 15.09
395 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.15
396 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
397 TestStartStop/group/embed-certs/serial/Pause 3.28
398 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 10.03
399 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
400 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
401 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.03
TestDownloadOnly/v1.16.0/json-events (14.76s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-482552 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-482552 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (14.758042605s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (14.76s)
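The test above drives "out/minikube-linux-arm64 start -o=json --download-only", which writes machine-readable progress events to stdout. A minimal sketch of consuming that stream from Go, decoding each line into a generic map because the exact event schema is not shown in this report (the "type" and "data" field names below are assumptions):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same invocation as the test above; the binary path is taken from this report.
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-o=json",
		"--download-only", "-p", "download-only-482552", "--force",
		"--kubernetes-version=v1.16.0", "--driver=docker", "--container-runtime=docker")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	// Each line of -o=json output is expected to be one JSON object; decode it
	// into a generic map rather than assuming a concrete struct layout.
	sc := bufio.NewScanner(stdout)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long event lines
	for sc.Scan() {
		var event map[string]interface{}
		if err := json.Unmarshal(sc.Bytes(), &event); err != nil {
			continue // skip any non-JSON noise
		}
		fmt.Println(event["type"], event["data"]) // field names are assumptions
	}
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}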

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-482552
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-482552: exit status 85 (79.238065ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-482552 | jenkins | v1.32.0 | 07 Dec 23 20:01 UTC |          |
	|         | -p download-only-482552        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 20:01:08
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 20:01:08.843851    7606 out.go:296] Setting OutFile to fd 1 ...
	I1207 20:01:08.844086    7606 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:01:08.844111    7606 out.go:309] Setting ErrFile to fd 2...
	I1207 20:01:08.844132    7606 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:01:08.844425    7606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-2292/.minikube/bin
	W1207 20:01:08.844616    7606 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17719-2292/.minikube/config/config.json: open /home/jenkins/minikube-integration/17719-2292/.minikube/config/config.json: no such file or directory
	I1207 20:01:08.845227    7606 out.go:303] Setting JSON to true
	I1207 20:01:08.846045    7606 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":2612,"bootTime":1701976657,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1207 20:01:08.846140    7606 start.go:138] virtualization:  
	I1207 20:01:08.848996    7606 out.go:97] [download-only-482552] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	W1207 20:01:08.849229    7606 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17719-2292/.minikube/cache/preloaded-tarball: no such file or directory
	I1207 20:01:08.849347    7606 notify.go:220] Checking for updates...
	I1207 20:01:08.851732    7606 out.go:169] MINIKUBE_LOCATION=17719
	I1207 20:01:08.853678    7606 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 20:01:08.855397    7606 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17719-2292/kubeconfig
	I1207 20:01:08.857135    7606 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-2292/.minikube
	I1207 20:01:08.859161    7606 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1207 20:01:08.862448    7606 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1207 20:01:08.862716    7606 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 20:01:08.886413    7606 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1207 20:01:08.886524    7606 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 20:01:09.215097    7606 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-12-07 20:01:09.205088193 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1207 20:01:09.215203    7606 docker.go:295] overlay module found
	I1207 20:01:09.217009    7606 out.go:97] Using the docker driver based on user configuration
	I1207 20:01:09.217038    7606 start.go:298] selected driver: docker
	I1207 20:01:09.217050    7606 start.go:902] validating driver "docker" against <nil>
	I1207 20:01:09.217144    7606 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 20:01:09.306969    7606 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-12-07 20:01:09.29779647 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1207 20:01:09.307136    7606 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 20:01:09.307426    7606 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1207 20:01:09.307595    7606 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1207 20:01:09.309296    7606 out.go:169] Using Docker driver with root privileges
	I1207 20:01:09.310778    7606 cni.go:84] Creating CNI manager for ""
	I1207 20:01:09.310806    7606 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1207 20:01:09.310817    7606 start_flags.go:323] config:
	{Name:download-only-482552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-482552 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:01:09.312728    7606 out.go:97] Starting control plane node download-only-482552 in cluster download-only-482552
	I1207 20:01:09.312762    7606 cache.go:121] Beginning downloading kic base image for docker with docker
	I1207 20:01:09.314368    7606 out.go:97] Pulling base image ...
	I1207 20:01:09.314397    7606 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1207 20:01:09.314590    7606 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c in local docker daemon
	I1207 20:01:09.334353    7606 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c to local cache
	I1207 20:01:09.334539    7606 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c in local cache directory
	I1207 20:01:09.334659    7606 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c to local cache
	I1207 20:01:09.399553    7606 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1207 20:01:09.399585    7606 cache.go:56] Caching tarball of preloaded images
	I1207 20:01:09.399758    7606 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1207 20:01:09.402260    7606 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1207 20:01:09.402289    7606 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1207 20:01:09.533310    7606 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /home/jenkins/minikube-integration/17719-2292/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1207 20:01:15.663395    7606 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-482552"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
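The download.go line in the log above fetches the preload tarball with an md5 digest in the query string (?checksum=md5:a000baffb0664b293d602f95ed25caa6). The sketch below shows the general pattern of downloading a file and verifying it against such a digest; it is an illustration using only the Go standard library, not minikube's download.go.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

// downloadAndVerify fetches url into dest and checks the body against an
// expected md5 hex digest, as carried in the ?checksum=md5:... query above.
func downloadAndVerify(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := md5.New()
	// Stream the body to disk and into the hash in a single pass.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// URL and digest copied from the log lines above; the local file name is arbitrary.
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4"
	if err := downloadAndVerify(url, "preload.tar.lz4", "a000baffb0664b293d602f95ed25caa6"); err != nil {
		log.Fatal(err)
	}
}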

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (13.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-482552 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-482552 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker  --container-runtime=docker: (13.18645853s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (13.19s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-482552
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-482552: exit status 85 (83.300234ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-482552 | jenkins | v1.32.0 | 07 Dec 23 20:01 UTC |          |
	|         | -p download-only-482552        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-482552 | jenkins | v1.32.0 | 07 Dec 23 20:01 UTC |          |
	|         | -p download-only-482552        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 20:01:23
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 20:01:23.683615    7678 out.go:296] Setting OutFile to fd 1 ...
	I1207 20:01:23.683840    7678 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:01:23.683864    7678 out.go:309] Setting ErrFile to fd 2...
	I1207 20:01:23.683883    7678 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:01:23.684198    7678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-2292/.minikube/bin
	W1207 20:01:23.684380    7678 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17719-2292/.minikube/config/config.json: open /home/jenkins/minikube-integration/17719-2292/.minikube/config/config.json: no such file or directory
	I1207 20:01:23.684671    7678 out.go:303] Setting JSON to true
	I1207 20:01:23.685822    7678 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":2627,"bootTime":1701976657,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1207 20:01:23.685920    7678 start.go:138] virtualization:  
	I1207 20:01:23.688294    7678 out.go:97] [download-only-482552] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1207 20:01:23.690375    7678 out.go:169] MINIKUBE_LOCATION=17719
	I1207 20:01:23.688532    7678 notify.go:220] Checking for updates...
	I1207 20:01:23.693614    7678 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 20:01:23.695377    7678 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17719-2292/kubeconfig
	I1207 20:01:23.697070    7678 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-2292/.minikube
	I1207 20:01:23.698725    7678 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1207 20:01:23.701878    7678 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1207 20:01:23.702378    7678 config.go:182] Loaded profile config "download-only-482552": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1207 20:01:23.702428    7678 start.go:810] api.Load failed for download-only-482552: filestore "download-only-482552": Docker machine "download-only-482552" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1207 20:01:23.702546    7678 driver.go:392] Setting default libvirt URI to qemu:///system
	W1207 20:01:23.702576    7678 start.go:810] api.Load failed for download-only-482552: filestore "download-only-482552": Docker machine "download-only-482552" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1207 20:01:23.727150    7678 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1207 20:01:23.727258    7678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 20:01:23.820213    7678 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-07 20:01:23.807976341 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1207 20:01:23.820317    7678 docker.go:295] overlay module found
	I1207 20:01:23.821880    7678 out.go:97] Using the docker driver based on existing profile
	I1207 20:01:23.821904    7678 start.go:298] selected driver: docker
	I1207 20:01:23.821911    7678 start.go:902] validating driver "docker" against &{Name:download-only-482552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-482552 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:01:23.822088    7678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 20:01:23.898130    7678 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-07 20:01:23.888755702 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1207 20:01:23.898560    7678 cni.go:84] Creating CNI manager for ""
	I1207 20:01:23.898585    7678 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 20:01:23.898597    7678 start_flags.go:323] config:
	{Name:download-only-482552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-482552 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GP
Us:}
	I1207 20:01:23.901122    7678 out.go:97] Starting control plane node download-only-482552 in cluster download-only-482552
	I1207 20:01:23.901153    7678 cache.go:121] Beginning downloading kic base image for docker with docker
	I1207 20:01:23.902703    7678 out.go:97] Pulling base image ...
	I1207 20:01:23.902749    7678 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 20:01:23.902919    7678 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c in local docker daemon
	I1207 20:01:23.920252    7678 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c to local cache
	I1207 20:01:23.920386    7678 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c in local cache directory
	I1207 20:01:23.920408    7678 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c in local cache directory, skipping pull
	I1207 20:01:23.920413    7678 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c exists in cache, skipping pull
	I1207 20:01:23.920423    7678 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c as a tarball
	I1207 20:01:23.982653    7678 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1207 20:01:23.982696    7678 cache.go:56] Caching tarball of preloaded images
	I1207 20:01:23.982874    7678 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 20:01:23.984719    7678 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1207 20:01:23.984737    7678 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I1207 20:01:24.102211    7678 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4?checksum=md5:6fb922d1d9dc01a9d3c0b965ed219613 -> /home/jenkins/minikube-integration/17719-2292/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-482552"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)
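The image.go lines in this run show the kic base image being looked up in the local cache directory and the pull being skipped because it was already saved by the earlier v1.16.0 run. Below is a rough sketch of that check-cache-then-skip pattern; the cache path, file naming, and pull callback are illustrative assumptions, not minikube's image.go.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// ensureBaseImage mirrors the pattern visible in the log above: if a cached
// copy of the image already exists, skip the pull; otherwise download it.
func ensureBaseImage(cacheDir, imageRef string, pull func(string) error) error {
	cached := filepath.Join(cacheDir, filepath.Base(imageRef)+".tar")
	if _, err := os.Stat(cached); err == nil {
		fmt.Println("found", imageRef, "in local cache directory, skipping pull")
		return nil
	}
	fmt.Println("downloading", imageRef, "to local cache")
	return pull(imageRef)
}

func main() {
	img := "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719"
	_ = ensureBaseImage(os.ExpandEnv("$HOME/.minikube/cache/kic"), img, func(ref string) error {
		// A real implementation would pull the image and save it here.
		return nil
	})
}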

                                                
                                    
TestDownloadOnly/v1.29.0-rc.1/json-events (11.43s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-482552 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-482552 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (11.428238733s)
--- PASS: TestDownloadOnly/v1.29.0-rc.1/json-events (11.43s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.1/LogsDuration (0.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-482552
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-482552: exit status 85 (176.336486ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-482552 | jenkins | v1.32.0 | 07 Dec 23 20:01 UTC |          |
	|         | -p download-only-482552           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-482552 | jenkins | v1.32.0 | 07 Dec 23 20:01 UTC |          |
	|         | -p download-only-482552           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-482552 | jenkins | v1.32.0 | 07 Dec 23 20:01 UTC |          |
	|         | -p download-only-482552           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.1 |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 20:01:36
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 20:01:36.955838    7752 out.go:296] Setting OutFile to fd 1 ...
	I1207 20:01:36.955991    7752 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:01:36.956001    7752 out.go:309] Setting ErrFile to fd 2...
	I1207 20:01:36.956008    7752 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:01:36.956250    7752 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-2292/.minikube/bin
	W1207 20:01:36.956373    7752 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17719-2292/.minikube/config/config.json: open /home/jenkins/minikube-integration/17719-2292/.minikube/config/config.json: no such file or directory
	I1207 20:01:36.956610    7752 out.go:303] Setting JSON to true
	I1207 20:01:36.957423    7752 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":2640,"bootTime":1701976657,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1207 20:01:36.957492    7752 start.go:138] virtualization:  
	I1207 20:01:36.959678    7752 out.go:97] [download-only-482552] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1207 20:01:36.961368    7752 out.go:169] MINIKUBE_LOCATION=17719
	I1207 20:01:36.959958    7752 notify.go:220] Checking for updates...
	I1207 20:01:36.965455    7752 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 20:01:36.968389    7752 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17719-2292/kubeconfig
	I1207 20:01:36.970078    7752 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-2292/.minikube
	I1207 20:01:36.971899    7752 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1207 20:01:36.975815    7752 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1207 20:01:36.976459    7752 config.go:182] Loaded profile config "download-only-482552": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W1207 20:01:36.976560    7752 start.go:810] api.Load failed for download-only-482552: filestore "download-only-482552": Docker machine "download-only-482552" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1207 20:01:36.976727    7752 driver.go:392] Setting default libvirt URI to qemu:///system
	W1207 20:01:36.976762    7752 start.go:810] api.Load failed for download-only-482552: filestore "download-only-482552": Docker machine "download-only-482552" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1207 20:01:37.000361    7752 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1207 20:01:37.000478    7752 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 20:01:37.106361    7752 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-07 20:01:37.09678042 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1207 20:01:37.106459    7752 docker.go:295] overlay module found
	I1207 20:01:37.109293    7752 out.go:97] Using the docker driver based on existing profile
	I1207 20:01:37.109348    7752 start.go:298] selected driver: docker
	I1207 20:01:37.109358    7752 start.go:902] validating driver "docker" against &{Name:download-only-482552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-482552 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:01:37.109538    7752 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 20:01:37.178105    7752 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-12-07 20:01:37.169006662 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1207 20:01:37.178601    7752 cni.go:84] Creating CNI manager for ""
	I1207 20:01:37.178625    7752 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 20:01:37.178640    7752 start_flags.go:323] config:
	{Name:download-only-482552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:download-only-482552 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contai
nerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m
0s GPUs:}
	I1207 20:01:37.180796    7752 out.go:97] Starting control plane node download-only-482552 in cluster download-only-482552
	I1207 20:01:37.180825    7752 cache.go:121] Beginning downloading kic base image for docker with docker
	I1207 20:01:37.182993    7752 out.go:97] Pulling base image ...
	I1207 20:01:37.183016    7752 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime docker
	I1207 20:01:37.183180    7752 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c in local docker daemon
	I1207 20:01:37.200696    7752 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c to local cache
	I1207 20:01:37.200831    7752 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c in local cache directory
	I1207 20:01:37.200853    7752 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c in local cache directory, skipping pull
	I1207 20:01:37.200861    7752 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c exists in cache, skipping pull
	I1207 20:01:37.200868    7752 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c as a tarball
	I1207 20:01:37.253788    7752 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.1/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4
	I1207 20:01:37.253825    7752 cache.go:56] Caching tarball of preloaded images
	I1207 20:01:37.253976    7752 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime docker
	I1207 20:01:37.255952    7752 out.go:97] Downloading Kubernetes v1.29.0-rc.1 preload ...
	I1207 20:01:37.255973    7752 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4 ...
	I1207 20:01:37.386340    7752 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.1/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4?checksum=md5:e6c70ba8af96149bcd57a348676cbfba -> /home/jenkins/minikube-integration/17719-2292/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4
	I1207 20:01:46.786944    7752 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4 ...
	I1207 20:01:46.787050    7752 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17719-2292/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-482552"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.1/LogsDuration (0.18s)
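Note: the download-only pass above pulls the v1.29.0-rc.1 preload tarball and verifies it against the md5 checksum embedded in the download URL. As a minimal manual check, assuming the cache path reported in the log, the same verification is a one-liner:

md5sum /home/jenkins/minikube-integration/17719-2292/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4
# expected digest (from the ?checksum= parameter above): e6c70ba8af96149bcd57a348676cbfba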

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.33s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.33s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-482552
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.21s)

                                                
                                    
x
+
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-439095 --alsologtostderr --binary-mirror http://127.0.0.1:35697 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-439095" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-439095
--- PASS: TestBinaryMirror (0.62s)
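Note: TestBinaryMirror only asserts that a --download-only start accepts an alternate endpoint for the kubectl/kubelet/kubeadm downloads. The invocation from the log, reformatted for readability (the port is whatever the test harness happened to bind; any mirror serving the same paths as the default binary store should work):

out/minikube-linux-arm64 start --download-only -p binary-mirror-439095 \
  --alsologtostderr --binary-mirror http://127.0.0.1:35697 \
  --driver=docker --container-runtime=docker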

                                                
                                    
x
+
TestOffline (103.88s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-242133 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-242133 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m41.39185597s)
helpers_test.go:175: Cleaning up "offline-docker-242133" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-242133
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-242133: (2.487612129s)
--- PASS: TestOffline (103.88s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-946218
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-946218: exit status 85 (89.067218ms)

                                                
                                                
-- stdout --
	* Profile "addons-946218" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-946218"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-946218
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-946218: exit status 85 (93.787917ms)

                                                
                                                
-- stdout --
	* Profile "addons-946218" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-946218"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
x
+
TestAddons/Setup (145.27s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-946218 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-946218 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (2m25.266015552s)
--- PASS: TestAddons/Setup (145.27s)

                                                
                                    
x
+
TestAddons/parallel/Registry (14.74s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 45.405768ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-vbggm" [f9501618-888e-41c1-87bc-c0c145626641] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.016261368s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-88zd5" [05cfdd65-400d-46d7-a81d-b22181d9c3d1] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.015026127s
addons_test.go:339: (dbg) Run:  kubectl --context addons-946218 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-946218 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-946218 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.452818026s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p addons-946218 ip
2023/12/07 20:04:30 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-arm64 -p addons-946218 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.74s)
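Note: the registry check above comes down to two probes: an in-cluster fetch of the registry service from a throwaway pod, and a plain HTTP GET against the node IP on port 5000 (the address this run reported). Both can be repeated by hand; the /v2/ path below is the standard Docker registry API root and is an addition, not something the test requests:

kubectl --context addons-946218 run --rm registry-test --restart=Never \
  --image=gcr.io/k8s-minikube/busybox -it -- \
  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
curl -s http://192.168.49.2:5000/v2/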

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.93s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-5pktf" [b2fdf085-9d52-4a64-98e3-c9ff2e6c7012] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.012330537s
addons_test.go:840: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-946218
addons_test.go:840: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-946218: (5.916580394s)
--- PASS: TestAddons/parallel/InspektorGadget (10.93s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.95s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 4.875881ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-hc4mm" [796d70f9-0a3c-4906-923f-5239ec4a547f] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.015583279s
addons_test.go:414: (dbg) Run:  kubectl --context addons-946218 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-arm64 -p addons-946218 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.95s)
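Note: with metrics-server reporting healthy, the same resource query the test issues works directly from any kubectl context pointed at the cluster; kubectl top nodes is not part of the test but is the usual companion check:

kubectl --context addons-946218 top pods -n kube-system
kubectl --context addons-946218 top nodes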

                                                
                                    
x
+
TestAddons/parallel/CSI (53.47s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 45.361699ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-946218 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-946218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-946218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-946218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-946218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-946218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-946218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-946218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-946218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-946218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-946218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-946218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-946218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-946218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-946218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-946218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-946218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-946218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-946218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-946218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-946218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-946218 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-946218 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [dbbd1e61-e213-4ba3-9d15-c8b319157d0e] Pending
helpers_test.go:344: "task-pv-pod" [dbbd1e61-e213-4ba3-9d15-c8b319157d0e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [dbbd1e61-e213-4ba3-9d15-c8b319157d0e] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.025797301s
addons_test.go:583: (dbg) Run:  kubectl --context addons-946218 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-946218 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-946218 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-946218 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-946218 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-946218 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-946218 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-946218 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-946218 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-946218 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-946218 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [1bd8e1d7-92fa-48fc-9729-8b458549ac51] Pending
helpers_test.go:344: "task-pv-pod-restore" [1bd8e1d7-92fa-48fc-9729-8b458549ac51] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [1bd8e1d7-92fa-48fc-9729-8b458549ac51] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.042106126s
addons_test.go:625: (dbg) Run:  kubectl --context addons-946218 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-946218 delete pod task-pv-pod-restore: (1.193423902s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-946218 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-946218 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-arm64 -p addons-946218 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-arm64 -p addons-946218 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.905526992s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-arm64 -p addons-946218 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (53.47s)
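Note: the CSI flow above is claim -> pod -> snapshot -> restored claim -> restored pod, with the phase polls repeating until each claim binds. The manifests live in testdata/csi-hostpath-driver/ and are not reproduced in this report; as a rough sketch of the first step, assuming the addon registers a storage class named csi-hostpath-sc:

kubectl --context addons-946218 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-hostpath-sc   # assumed class name for the csi-hostpath addon
EOF
kubectl --context addons-946218 get pvc hpvc -o jsonpath={.status.phase}

The snapshot and restore steps use VolumeSnapshot objects from the snapshot.storage.k8s.io API installed by the volumesnapshots addon.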

                                                
                                    
x
+
TestAddons/parallel/Headlamp (12.19s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-946218 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-946218 --alsologtostderr -v=1: (1.145079853s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-7kf7z" [29fb3e96-c4d4-4544-8b89-77ae5710e1f3] Pending
helpers_test.go:344: "headlamp-777fd4b855-7kf7z" [29fb3e96-c4d4-4544-8b89-77ae5710e1f3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-7kf7z" [29fb3e96-c4d4-4544-8b89-77ae5710e1f3] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.040666426s
--- PASS: TestAddons/parallel/Headlamp (12.19s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.55s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-mx6tg" [f887b03d-3ce2-4ce8-996b-a975375c765a] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.013952402s
addons_test.go:859: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-946218
--- PASS: TestAddons/parallel/CloudSpanner (5.55s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (53.14s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-946218 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-946218 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-946218 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-946218 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-946218 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-946218 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-946218 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-946218 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d5896da7-e14c-403f-816f-595ebef40cdb] Pending
helpers_test.go:344: "test-local-path" [d5896da7-e14c-403f-816f-595ebef40cdb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [d5896da7-e14c-403f-816f-595ebef40cdb] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [d5896da7-e14c-403f-816f-595ebef40cdb] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.010235866s
addons_test.go:890: (dbg) Run:  kubectl --context addons-946218 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-arm64 -p addons-946218 ssh "cat /opt/local-path-provisioner/pvc-6224022a-bf0c-43f9-b398-1fc2163a085b_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-946218 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-946218 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-arm64 -p addons-946218 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-arm64 -p addons-946218 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.170215662s)
--- PASS: TestAddons/parallel/LocalPath (53.14s)
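Note: the local-path test writes through a PVC provisioned by the Rancher local-path provisioner and then reads the file back from /opt/local-path-provisioner on the node, as the ssh cat above shows. A minimal sketch, assuming the addon's class is named local-path; the claim stays Pending until a consuming pod is scheduled, which is why the test applies pod.yaml alongside pvc.yaml:

kubectl --context addons-946218 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 64Mi
  storageClassName: local-path   # assumed class name for the storage-provisioner-rancher addon
EOF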

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.5s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-pq9kj" [c8d63810-cef8-46ea-8b3f-4a331c68a9ce] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.011966378s
addons_test.go:954: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-946218
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.50s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-946218 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-946218 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (11.24s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-946218
addons_test.go:171: (dbg) Done: out/minikube-linux-arm64 stop -p addons-946218: (10.92701428s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-946218
addons_test.go:179: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-946218
addons_test.go:184: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-946218
--- PASS: TestAddons/StoppedEnableDisable (11.24s)

                                                
                                    
x
+
TestCertOptions (39.08s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-977678 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-977678 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (36.096689398s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-977678 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-977678 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-977678 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-977678" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-977678
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-977678: (2.245359076s)
--- PASS: TestCertOptions (39.08s)
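Note: TestCertOptions regenerates the apiserver certificate with extra SANs and a non-default port, then inspects it inside the node. The relevant checks, built from the commands in the log:

out/minikube-linux-arm64 -p cert-options-977678 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
# expect 192.168.15.15 and www.google.com among the SANs
kubectl --context cert-options-977678 config view | grep 8555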

                                                
                                    
x
+
TestCertExpiration (251.04s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-635698 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
E1207 20:39:04.781001    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-635698 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (41.536255363s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-635698 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-635698 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (27.256477553s)
helpers_test.go:175: Cleaning up "cert-expiration-635698" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-635698
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-635698: (2.246469114s)
--- PASS: TestCertExpiration (251.04s)
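Note: the expiration test starts the profile with --cert-expiration=3m, waits out the three minutes, then restarts with --cert-expiration=8760h so the certificates are re-issued. To see the effect directly, openssl's -enddate prints the notAfter field of the current apiserver certificate (same in-node path the cert-options test reads):

out/minikube-linux-arm64 -p cert-expiration-635698 ssh \
  "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"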

                                                
                                    
x
+
TestDockerFlags (48.74s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-996478 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-996478 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (45.2536103s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-996478 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-996478 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-996478" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-996478
E1207 20:39:16.130528    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-996478: (2.527626701s)
--- PASS: TestDockerFlags (48.74s)
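Note: the flags test passes --docker-env and --docker-opt values at start time and then confirms they reached the systemd unit; the two verification commands from the log are reusable as-is when debugging daemon configuration. Environment should list FOO=BAR and BAZ=BAT, and ExecStart should carry the configured --docker-opt values:

out/minikube-linux-arm64 -p docker-flags-996478 ssh "sudo systemctl show docker --property=Environment --no-pager"
out/minikube-linux-arm64 -p docker-flags-996478 ssh "sudo systemctl show docker --property=ExecStart --no-pager"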

                                                
                                    
x
+
TestForceSystemdFlag (45.9s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-968747 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-968747 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (43.261207557s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-968747 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-968747" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-968747
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-968747: (2.271765612s)
--- PASS: TestForceSystemdFlag (45.90s)

                                                
                                    
x
+
TestForceSystemdEnv (47.23s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-286550 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-286550 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (44.039873539s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-286550 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-286550" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-286550
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-286550: (2.616125958s)
--- PASS: TestForceSystemdEnv (47.23s)
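Note: both force-systemd variants assert the same outcome: with --force-systemd (or the equivalent environment toggle used by the Env variant), Docker inside the node reports systemd as its cgroup driver. The one-line check from the log:

out/minikube-linux-arm64 -p force-systemd-env-286550 ssh "docker info --format {{.CgroupDriver}}"
# expected output: systemd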

                                                
                                    
x
+
TestErrorSpam/setup (32.91s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-028683 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-028683 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-028683 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-028683 --driver=docker  --container-runtime=docker: (32.914564091s)
--- PASS: TestErrorSpam/setup (32.91s)

                                                
                                    
x
+
TestErrorSpam/start (0.86s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-028683 --log_dir /tmp/nospam-028683 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-028683 --log_dir /tmp/nospam-028683 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-028683 --log_dir /tmp/nospam-028683 start --dry-run
--- PASS: TestErrorSpam/start (0.86s)

                                                
                                    
x
+
TestErrorSpam/status (1.12s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-028683 --log_dir /tmp/nospam-028683 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-028683 --log_dir /tmp/nospam-028683 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-028683 --log_dir /tmp/nospam-028683 status
--- PASS: TestErrorSpam/status (1.12s)

                                                
                                    
x
+
TestErrorSpam/pause (1.42s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-028683 --log_dir /tmp/nospam-028683 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-028683 --log_dir /tmp/nospam-028683 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-028683 --log_dir /tmp/nospam-028683 pause
--- PASS: TestErrorSpam/pause (1.42s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.64s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-028683 --log_dir /tmp/nospam-028683 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-028683 --log_dir /tmp/nospam-028683 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-028683 --log_dir /tmp/nospam-028683 unpause
--- PASS: TestErrorSpam/unpause (1.64s)

                                                
                                    
x
+
TestErrorSpam/stop (11.01s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-028683 --log_dir /tmp/nospam-028683 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-028683 --log_dir /tmp/nospam-028683 stop: (10.789730283s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-028683 --log_dir /tmp/nospam-028683 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-028683 --log_dir /tmp/nospam-028683 stop
--- PASS: TestErrorSpam/stop (11.01s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17719-2292/.minikube/files/etc/test/nested/copy/7600/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (77.02s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-718233 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-718233 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m17.020840013s)
--- PASS: TestFunctional/serial/StartWithProxy (77.02s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (34.97s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-718233 --alsologtostderr -v=8
E1207 20:09:16.132551    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
E1207 20:09:16.140482    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
E1207 20:09:16.150783    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
E1207 20:09:16.170987    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
E1207 20:09:16.211218    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
E1207 20:09:16.291489    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
E1207 20:09:16.451978    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
E1207 20:09:16.772581    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-718233 --alsologtostderr -v=8: (34.962141837s)
functional_test.go:659: soft start took 34.969621627s for "functional-718233" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.97s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-718233 get po -A
E1207 20:09:17.413554    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.96s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-718233 cache add registry.k8s.io/pause:3.1: (1.075634365s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 cache add registry.k8s.io/pause:3.3
E1207 20:09:18.693929    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-718233 cache add registry.k8s.io/pause:3.3: (1.011967798s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.96s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (0.99s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-718233 /tmp/TestFunctionalserialCacheCmdcacheadd_local1610315341/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 cache add minikube-local-cache-test:functional-718233
E1207 20:09:21.254124    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 cache delete minikube-local-cache-test:functional-718233
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-718233
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.99s)
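Note: the local-cache test round-trips an image from the host Docker daemon into minikube's image cache and back out. The sequence from the log, with the throwaway build context generalized to the current directory:

docker build -t minikube-local-cache-test:functional-718233 .
out/minikube-linux-arm64 -p functional-718233 cache add minikube-local-cache-test:functional-718233
out/minikube-linux-arm64 -p functional-718233 cache delete minikube-local-cache-test:functional-718233
docker rmi minikube-local-cache-test:functional-718233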

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.69s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-718233 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (342.884517ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.69s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 kubectl -- --context functional-718233 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-718233 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (43.27s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-718233 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1207 20:09:26.375103    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
E1207 20:09:36.615338    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
E1207 20:09:57.095600    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-718233 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.26710199s)
functional_test.go:757: restart took 43.267210174s for "functional-718233" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.27s)
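Note: the restart above injects an apiserver admission-plugin override through --extra-config. One way to confirm the flag reached the static pod, as a sketch assuming the standard kubeadm manifest path inside the node:

out/minikube-linux-arm64 -p functional-718233 ssh \
  "sudo grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml"
# expect: --enable-admission-plugins=NamespaceAutoProvision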

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-718233 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-718233 logs: (1.308346895s)
--- PASS: TestFunctional/serial/LogsCmd (1.31s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 logs --file /tmp/TestFunctionalserialLogsFileCmd2704962308/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-718233 logs --file /tmp/TestFunctionalserialLogsFileCmd2704962308/001/logs.txt: (1.312384104s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.31s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (5s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-718233 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-718233
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-718233: exit status 115 (1.044018207s)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31392 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-718233 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (5.00s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-718233 config get cpus: exit status 14 (121.339199ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-718233 config get cpus: exit status 14 (115.82347ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.64s)
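The test amounts to the following round trip: `config get` exits with status 14 and reports "specified key could not be found in config" whenever the key is unset, while `config set` and `config unset` succeed silently.

    out/minikube-linux-arm64 -p functional-718233 config unset cpus
    out/minikube-linux-arm64 -p functional-718233 config get cpus    # exit status 14: key not set
    out/minikube-linux-arm64 -p functional-718233 config set cpus 2
    out/minikube-linux-arm64 -p functional-718233 config get cpus    # prints 2
    out/minikube-linux-arm64 -p functional-718233 config unset cpus
    out/minikube-linux-arm64 -p functional-718233 config get cpus    # exit status 14 again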

                                                
                                    
+
TestFunctional/parallel/DashboardCmd (12.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-718233 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-718233 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 48460: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.12s)

                                                
                                    
+
TestFunctional/parallel/DryRun (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-718233 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-718233 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (315.236479ms)

                                                
                                                
-- stdout --
	* [functional-718233] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17719-2292/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-2292/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 20:11:02.424471   47670 out.go:296] Setting OutFile to fd 1 ...
	I1207 20:11:02.424903   47670 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:11:02.424935   47670 out.go:309] Setting ErrFile to fd 2...
	I1207 20:11:02.424970   47670 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:11:02.425345   47670 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-2292/.minikube/bin
	I1207 20:11:02.425938   47670 out.go:303] Setting JSON to false
	I1207 20:11:02.427073   47670 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":3206,"bootTime":1701976657,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1207 20:11:02.427168   47670 start.go:138] virtualization:  
	I1207 20:11:02.432437   47670 out.go:177] * [functional-718233] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1207 20:11:02.434418   47670 notify.go:220] Checking for updates...
	I1207 20:11:02.437214   47670 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 20:11:02.439654   47670 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 20:11:02.441459   47670 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17719-2292/kubeconfig
	I1207 20:11:02.443127   47670 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-2292/.minikube
	I1207 20:11:02.444945   47670 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1207 20:11:02.446878   47670 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 20:11:02.449326   47670 config.go:182] Loaded profile config "functional-718233": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 20:11:02.449944   47670 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 20:11:02.489486   47670 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1207 20:11:02.489593   47670 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 20:11:02.632949   47670 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-12-07 20:11:02.62282042 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1207 20:11:02.633040   47670 docker.go:295] overlay module found
	I1207 20:11:02.636547   47670 out.go:177] * Using the docker driver based on existing profile
	I1207 20:11:02.638558   47670 start.go:298] selected driver: docker
	I1207 20:11:02.638578   47670 start.go:902] validating driver "docker" against &{Name:functional-718233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-718233 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:11:02.638695   47670 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 20:11:02.641395   47670 out.go:177] 
	W1207 20:11:02.643473   47670 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1207 20:11:02.645479   47670 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-718233 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.75s)
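--dry-run validates the request against the existing profile without creating anything: the 250MB request trips the RSRC_INSUFFICIENT_REQ_MEMORY guard (exit status 23), while the second dry run with the profile's own memory setting passes. Condensed:

    # fails: 250MiB is below the usable minimum of 1800MB (exit status 23)
    out/minikube-linux-arm64 start -p functional-718233 --dry-run --memory 250MB \
      --alsologtostderr --driver=docker --container-runtime=docker
    # passes: dry run against the existing profile
    out/minikube-linux-arm64 start -p functional-718233 --dry-run \
      --alsologtostderr -v=1 --driver=docker --container-runtime=docker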

                                                
                                    
+
TestFunctional/parallel/InternationalLanguage (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-718233 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-718233 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (351.06911ms)

                                                
                                                
-- stdout --
	* [functional-718233] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17719-2292/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-2292/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 20:11:03.219228   47856 out.go:296] Setting OutFile to fd 1 ...
	I1207 20:11:03.219444   47856 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:11:03.219449   47856 out.go:309] Setting ErrFile to fd 2...
	I1207 20:11:03.219455   47856 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:11:03.220511   47856 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-2292/.minikube/bin
	I1207 20:11:03.221051   47856 out.go:303] Setting JSON to false
	I1207 20:11:03.222354   47856 start.go:128] hostinfo: {"hostname":"ip-172-31-29-130","uptime":3207,"bootTime":1701976657,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1207 20:11:03.222490   47856 start.go:138] virtualization:  
	I1207 20:11:03.225282   47856 out.go:177] * [functional-718233] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I1207 20:11:03.228615   47856 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 20:11:03.228840   47856 notify.go:220] Checking for updates...
	I1207 20:11:03.230854   47856 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 20:11:03.232990   47856 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17719-2292/kubeconfig
	I1207 20:11:03.235132   47856 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-2292/.minikube
	I1207 20:11:03.237007   47856 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1207 20:11:03.238911   47856 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 20:11:03.241489   47856 config.go:182] Loaded profile config "functional-718233": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 20:11:03.242119   47856 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 20:11:03.280830   47856 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1207 20:11:03.280934   47856 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 20:11:03.412593   47856 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-12-07 20:11:03.394398117 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1207 20:11:03.417158   47856 docker.go:295] overlay module found
	I1207 20:11:03.419756   47856 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1207 20:11:03.422277   47856 start.go:298] selected driver: docker
	I1207 20:11:03.422305   47856 start.go:902] validating driver "docker" against &{Name:functional-718233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-718233 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 20:11:03.422410   47856 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 20:11:03.427547   47856 out.go:177] 
	W1207 20:11:03.431621   47856 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1207 20:11:03.433368   47856 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.35s)

                                                
                                    
+
TestFunctional/parallel/StatusCmd (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.36s)
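The three invocations cover the default table output, a custom Go template (requoted here for an interactive shell; the format string, including the "kublet" key name, is taken verbatim from the test), and JSON output:

    out/minikube-linux-arm64 -p functional-718233 status
    out/minikube-linux-arm64 -p functional-718233 status \
      -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    out/minikube-linux-arm64 -p functional-718233 status -o json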

                                                
                                    
+
TestFunctional/parallel/ServiceCmdConnect (7.80s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-718233 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-718233 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-l7sxw" [98e87726-6059-485c-bcbd-524d4e5af23e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-l7sxw" [98e87726-6059-485c-bcbd-524d4e5af23e] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.02304426s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31030
functional_test.go:1674: http://192.168.49.2:31030: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-7799dfb7c6-l7sxw

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31030
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.80s)
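In outline: create a Deployment from the echoserver image, expose it as a NodePort Service on port 8080, resolve the URL with `minikube service --url`, and fetch it. The `kubectl wait` and `curl` lines are illustrative stand-ins for the readiness poll and HTTP request the test performs internally.

    kubectl --context functional-718233 create deployment hello-node-connect \
      --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-718233 expose deployment hello-node-connect --type=NodePort --port=8080
    kubectl --context functional-718233 wait --for=condition=available deployment/hello-node-connect --timeout=120s
    URL=$(out/minikube-linux-arm64 -p functional-718233 service hello-node-connect --url)
    curl -s "$URL"    # echoserver reports the pod hostname and request headers, as shown above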

                                                
                                    
+
TestFunctional/parallel/AddonsCmd (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

                                                
                                    
+
TestFunctional/parallel/PersistentVolumeClaim (26.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [29ef5adc-157e-4045-9ff7-d64ed99aaf60] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.016079392s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-718233 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-718233 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-718233 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-718233 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e6b9c0ac-aeac-4cb5-9f96-d3f57ebd64e6] Pending
helpers_test.go:344: "sp-pod" [e6b9c0ac-aeac-4cb5-9f96-d3f57ebd64e6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e6b9c0ac-aeac-4cb5-9f96-d3f57ebd64e6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.013056304s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-718233 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-718233 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-718233 delete -f testdata/storage-provisioner/pod.yaml: (1.509250416s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-718233 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2d462a91-18c4-4fe2-a54a-e5396844a01d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2d462a91-18c4-4fe2-a54a-e5396844a01d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.018101998s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-718233 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.12s)
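The test provisions a claim, mounts it in a pod, writes a file, recreates the pod, and checks that the file survives. The equivalent sequence, using the same testdata manifests referenced above:

    kubectl --context functional-718233 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-718233 get pvc myclaim -o=json        # wait until the claim is Bound
    kubectl --context functional-718233 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-718233 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-718233 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-718233 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-718233 exec sp-pod -- ls /tmp/mount    # foo persists across pods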

                                                
                                    
+
TestFunctional/parallel/SSHCmd (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.78s)

                                                
                                    
+
TestFunctional/parallel/CpCmd (1.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh -n functional-718233 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 cp functional-718233:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd947108493/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh -n functional-718233 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.56s)
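minikube cp copies in both directions, and each copy is verified by cat-ing the file on the node over SSH; the local destination path below is arbitrary (the test uses a generated temp directory):

    # host -> node
    out/minikube-linux-arm64 -p functional-718233 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-arm64 -p functional-718233 ssh -n functional-718233 "sudo cat /home/docker/cp-test.txt"
    # node -> host
    out/minikube-linux-arm64 -p functional-718233 cp functional-718233:/home/docker/cp-test.txt /tmp/cp-test.txt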

                                                
                                    
+
TestFunctional/parallel/FileSync (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/7600/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh "sudo cat /etc/test/nested/copy/7600/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

                                                
                                    
+
TestFunctional/parallel/CertSync (2.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/7600.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh "sudo cat /etc/ssl/certs/7600.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/7600.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh "sudo cat /usr/share/ca-certificates/7600.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/76002.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh "sudo cat /etc/ssl/certs/76002.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/76002.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh "sudo cat /usr/share/ca-certificates/76002.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.47s)

                                                
                                    
+
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-718233 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)
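The check is a single kubectl call whose Go template ranges over the first node's metadata.labels map and prints the label keys; it is requoted below for an interactive shell so that $k and $v are not expanded by bash:

    kubectl --context functional-718233 get nodes -o go-template \
      --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'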

                                                
                                    
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-718233 ssh "sudo systemctl is-active crio": exit status 1 (391.977464ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)
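Since this profile runs the docker container runtime, crio must be inactive: systemctl is-active prints "inactive" and exits 3, which minikube ssh surfaces as a non-zero exit (status 1 above).

    out/minikube-linux-arm64 -p functional-718233 ssh "sudo systemctl is-active crio"
    echo $?    # non-zero: the crio unit is not active on a docker-runtime node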

                                                
                                    
+
TestFunctional/parallel/License (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.41s)

                                                
                                    
+
TestFunctional/parallel/Version/short (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 version --short
--- PASS: TestFunctional/parallel/Version/short (0.14s)

                                                
                                    
+
TestFunctional/parallel/Version/components (1.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-718233 version -o=json --components: (1.124976864s)
--- PASS: TestFunctional/parallel/Version/components (1.13s)

                                                
                                    
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-718233 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-718233
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-718233
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-718233 image ls --format short --alsologtostderr:
I1207 20:11:11.126868   49266 out.go:296] Setting OutFile to fd 1 ...
I1207 20:11:11.127037   49266 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 20:11:11.127043   49266 out.go:309] Setting ErrFile to fd 2...
I1207 20:11:11.127049   49266 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 20:11:11.127329   49266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-2292/.minikube/bin
I1207 20:11:11.128198   49266 config.go:182] Loaded profile config "functional-718233": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1207 20:11:11.128386   49266 config.go:182] Loaded profile config "functional-718233": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1207 20:11:11.129031   49266 cli_runner.go:164] Run: docker container inspect functional-718233 --format={{.State.Status}}
I1207 20:11:11.148890   49266 ssh_runner.go:195] Run: systemctl --version
I1207 20:11:11.148949   49266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-718233
I1207 20:11:11.169799   49266 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/functional-718233/id_rsa Username:docker}
I1207 20:11:11.262864   49266 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
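The short listing above and the table, json, and yaml listings in the following sections are all renderings of the same image inventory:

    out/minikube-linux-arm64 -p functional-718233 image ls --format short
    out/minikube-linux-arm64 -p functional-718233 image ls --format table
    out/minikube-linux-arm64 -p functional-718233 image ls --format json
    out/minikube-linux-arm64 -p functional-718233 image ls --format yaml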

                                                
                                    
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-718233 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-718233 | 05becf7517f72 | 30B    |
| docker.io/library/nginx                     | latest            | 5628e5ea3c17f | 192MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | 9961cbceaf234 | 116MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 9cdd6470f48c8 | 181MB  |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/localhost/my-image                | functional-718233 | ec71d8ceebf7b | 1.41MB |
| docker.io/library/nginx                     | alpine            | f09fc93534f6a | 43.4MB |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 3ca3ca488cf13 | 68.4MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 04b4c447bb9d4 | 120MB  |
| registry.k8s.io/kube-scheduler              | v1.28.4           | 05c284c929889 | 57.8MB |
| gcr.io/google-containers/addon-resizer      | functional-718233 | ffd4cfbbe753e | 32.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-718233 image ls --format table --alsologtostderr:
I1207 20:11:14.814712   49681 out.go:296] Setting OutFile to fd 1 ...
I1207 20:11:14.814900   49681 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 20:11:14.814912   49681 out.go:309] Setting ErrFile to fd 2...
I1207 20:11:14.814920   49681 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 20:11:14.815256   49681 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-2292/.minikube/bin
I1207 20:11:14.815952   49681 config.go:182] Loaded profile config "functional-718233": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1207 20:11:14.816124   49681 config.go:182] Loaded profile config "functional-718233": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1207 20:11:14.816754   49681 cli_runner.go:164] Run: docker container inspect functional-718233 --format={{.State.Status}}
I1207 20:11:14.834860   49681 ssh_runner.go:195] Run: systemctl --version
I1207 20:11:14.834916   49681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-718233
I1207 20:11:14.853621   49681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/functional-718233/id_rsa Username:docker}
I1207 20:11:14.942439   49681 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2023/12/07 20:11:15 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-718233 image ls --format json --alsologtostderr:
[{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"05becf7517f7231dbed32e1444f9bf3b9e9c855e87155ac551674d71c82ff5f2","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-718233"],"size":"30"},{"id":"5628e5ea3c17fa1cbf496
92edf41d5a1cdf198922898e6ffb29c19768dca8fd3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"181000000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"ec71d8ceebf7b8bd5753094e09daab5b05bba06f57dfbdd7dd05f40c7ba27dc2","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-718233"],"size":"1410000"},{"id":"f09fc93534f6a80e1cb9ad70fe8c697b1596faa9f1b50895f203bc02feb9ebb8","repoDigests":[],"repoTags":
["docker.io/library/nginx:alpine"],"size":"43400000"},{"id":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"120000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-718233"],"size":"32900000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"116000000"},{"id":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":[],"repoTags":["registry.k8s.io/kube-sc
heduler:v1.28.4"],"size":"57800000"},{"id":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"68400000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-718233 image ls --format json --alsologtostderr:
I1207 20:11:14.581770   49655 out.go:296] Setting OutFile to fd 1 ...
I1207 20:11:14.581976   49655 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 20:11:14.581988   49655 out.go:309] Setting ErrFile to fd 2...
I1207 20:11:14.581996   49655 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 20:11:14.582289   49655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-2292/.minikube/bin
I1207 20:11:14.582986   49655 config.go:182] Loaded profile config "functional-718233": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1207 20:11:14.583163   49655 config.go:182] Loaded profile config "functional-718233": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1207 20:11:14.583892   49655 cli_runner.go:164] Run: docker container inspect functional-718233 --format={{.State.Status}}
I1207 20:11:14.602650   49655 ssh_runner.go:195] Run: systemctl --version
I1207 20:11:14.602706   49655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-718233
I1207 20:11:14.621039   49655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/functional-718233/id_rsa Username:docker}
I1207 20:11:14.710401   49655 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                    
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-718233 image ls --format yaml --alsologtostderr:
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 05becf7517f7231dbed32e1444f9bf3b9e9c855e87155ac551674d71c82ff5f2
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-718233
size: "30"
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "120000000"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "68400000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-718233
size: "32900000"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "116000000"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "181000000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "57800000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: f09fc93534f6a80e1cb9ad70fe8c697b1596faa9f1b50895f203bc02feb9ebb8
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43400000"
- id: 5628e5ea3c17fa1cbf49692edf41d5a1cdf198922898e6ffb29c19768dca8fd3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-718233 image ls --format yaml --alsologtostderr:
I1207 20:11:11.392966   49292 out.go:296] Setting OutFile to fd 1 ...
I1207 20:11:11.393096   49292 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 20:11:11.393146   49292 out.go:309] Setting ErrFile to fd 2...
I1207 20:11:11.393153   49292 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 20:11:11.393418   49292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-2292/.minikube/bin
I1207 20:11:11.394144   49292 config.go:182] Loaded profile config "functional-718233": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1207 20:11:11.394314   49292 config.go:182] Loaded profile config "functional-718233": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1207 20:11:11.394845   49292 cli_runner.go:164] Run: docker container inspect functional-718233 --format={{.State.Status}}
I1207 20:11:11.414216   49292 ssh_runner.go:195] Run: systemctl --version
I1207 20:11:11.414270   49292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-718233
I1207 20:11:11.435085   49292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/functional-718233/id_rsa Username:docker}
I1207 20:11:11.534590   49292 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
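Note: the listing above uses --format yaml; a minimal sketch of requesting the same inventory in other output formats (the json/table format names are assumptions based on the image ls help text, not shown in this log):

    # Hypothetical follow-up invocations against the same profile
    out/minikube-linux-arm64 -p functional-718233 image ls --format yaml
    out/minikube-linux-arm64 -p functional-718233 image ls --format json    # assumed format name
    out/minikube-linux-arm64 -p functional-718233 image ls --format table   # assumed format name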

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (2.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-718233 ssh pgrep buildkitd: exit status 1 (399.266736ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 image build -t localhost/my-image:functional-718233 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-718233 image build -t localhost/my-image:functional-718233 testdata/build --alsologtostderr: (2.307090603s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-718233 image build -t localhost/my-image:functional-718233 testdata/build --alsologtostderr:
I1207 20:11:12.087938   49373 out.go:296] Setting OutFile to fd 1 ...
I1207 20:11:12.088229   49373 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 20:11:12.088259   49373 out.go:309] Setting ErrFile to fd 2...
I1207 20:11:12.088279   49373 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 20:11:12.088646   49373 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-2292/.minikube/bin
I1207 20:11:12.089750   49373 config.go:182] Loaded profile config "functional-718233": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1207 20:11:12.092188   49373 config.go:182] Loaded profile config "functional-718233": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1207 20:11:12.092985   49373 cli_runner.go:164] Run: docker container inspect functional-718233 --format={{.State.Status}}
I1207 20:11:12.123454   49373 ssh_runner.go:195] Run: systemctl --version
I1207 20:11:12.123515   49373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-718233
I1207 20:11:12.153054   49373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/functional-718233/id_rsa Username:docker}
I1207 20:11:12.255192   49373 build_images.go:151] Building image from path: /tmp/build.2440456519.tar
I1207 20:11:12.255262   49373 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1207 20:11:12.271587   49373 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2440456519.tar
I1207 20:11:12.277958   49373 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2440456519.tar: stat -c "%s %y" /var/lib/minikube/build/build.2440456519.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2440456519.tar': No such file or directory
I1207 20:11:12.278005   49373 ssh_runner.go:362] scp /tmp/build.2440456519.tar --> /var/lib/minikube/build/build.2440456519.tar (3072 bytes)
I1207 20:11:12.316813   49373 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2440456519
I1207 20:11:12.329253   49373 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2440456519 -xf /var/lib/minikube/build/build.2440456519.tar
I1207 20:11:12.351300   49373 docker.go:346] Building image: /var/lib/minikube/build/build.2440456519
I1207 20:11:12.351389   49373 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-718233 /var/lib/minikube/build/build.2440456519
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.0s

                                                
                                                
#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 0.8s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.4s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:ec71d8ceebf7b8bd5753094e09daab5b05bba06f57dfbdd7dd05f40c7ba27dc2 done
#8 naming to localhost/my-image:functional-718233 done
#8 DONE 0.0s
I1207 20:11:14.253634   49373 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-718233 /var/lib/minikube/build/build.2440456519: (1.902216874s)
I1207 20:11:14.253708   49373 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2440456519
I1207 20:11:14.266434   49373 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2440456519.tar
I1207 20:11:14.277386   49373 build_images.go:207] Built localhost/my-image:functional-718233 from /tmp/build.2440456519.tar
I1207 20:11:14.277412   49373 build_images.go:123] succeeded building to: functional-718233
I1207 20:11:14.277417   49373 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.93s)
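Note: the build log above shows three steps (FROM the busybox base, RUN true, ADD content.txt). A minimal sketch of an equivalent build context, assuming hypothetical paths and file contents rather than the real testdata/build directory:

    # Recreate a build context matching the three logged steps (hypothetical paths/content)
    mkdir -p /tmp/build-ctx
    echo "test content" > /tmp/build-ctx/content.txt
    cat > /tmp/build-ctx/Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /
    EOF
    out/minikube-linux-arm64 -p functional-718233 image build -t localhost/my-image:functional-718233 /tmp/build-ctx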

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.77484262s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-718233
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.81s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv/bash (1.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-718233 docker-env) && out/minikube-linux-arm64 status -p functional-718233"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-718233 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.42s)
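Note: docker-env prints export statements that point the host docker CLI at the daemon inside the minikube node. A minimal sketch of the same round trip the test performs:

    # Point the local docker client at the dockerd inside functional-718233, then list its images
    eval $(out/minikube-linux-arm64 -p functional-718233 docker-env)
    out/minikube-linux-arm64 status -p functional-718233
    docker images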

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)
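Note: all three update-context subtests run the same command, which rewrites the profile's kubeconfig entry so kubectl targets the current API server address. A minimal sketch with a verification step (the kubectl check is an assumption, not part of this log):

    out/minikube-linux-arm64 -p functional-718233 update-context --alsologtostderr -v=2
    kubectl --context functional-718233 get nodes    # assumed verification step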

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 image load --daemon gcr.io/google-containers/addon-resizer:functional-718233 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-718233 image load --daemon gcr.io/google-containers/addon-resizer:functional-718233 --alsologtostderr: (4.131498742s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.47s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (10.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-718233 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-718233 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-z9547" [3d95f42b-40b1-4574-9449-a48a6f2a47f6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-z9547" [3d95f42b-40b1-4574-9449-a48a6f2a47f6] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.033007657s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.33s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 image load --daemon gcr.io/google-containers/addon-resizer:functional-718233 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-718233 image load --daemon gcr.io/google-containers/addon-resizer:functional-718233 --alsologtostderr: (2.757536372s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.98s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.862278543s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-718233
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 image load --daemon gcr.io/google-containers/addon-resizer:functional-718233 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-718233 image load --daemon gcr.io/google-containers/addon-resizer:functional-718233 --alsologtostderr: (3.282099927s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.41s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 image save gcr.io/google-containers/addon-resizer:functional-718233 /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-arm64 -p functional-718233 image save gcr.io/google-containers/addon-resizer:functional-718233 /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr: (1.100986559s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.10s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 service list -o json
functional_test.go:1493: Took "659.217465ms" to run "out/minikube-linux-arm64 -p functional-718233 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 image rm gcr.io/google-containers/addon-resizer:functional-718233 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.64s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:32763
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-718233 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr: (1.628577296s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.91s)
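Note: together with ImageSaveToFile above, this covers the save-to-tar / load-from-tar round trip. A minimal sketch using a hypothetical local path for the tarball:

    out/minikube-linux-arm64 -p functional-718233 image save \
      gcr.io/google-containers/addon-resizer:functional-718233 ./addon-resizer-save.tar
    out/minikube-linux-arm64 -p functional-718233 image load ./addon-resizer-save.tar
    out/minikube-linux-arm64 -p functional-718233 image ls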

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.61s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:32763
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.59s)
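Note: the ServiceCmd subtests cover the full flow: deploy, expose as a NodePort service, then resolve the endpoint. A minimal sketch combining the commands already shown in the logs above:

    kubectl --context functional-718233 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-718233 expose deployment hello-node --type=NodePort --port=8080
    out/minikube-linux-arm64 -p functional-718233 service list -o json
    out/minikube-linux-arm64 -p functional-718233 service hello-node --url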

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-718233
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 image save --daemon gcr.io/google-containers/addon-resizer:functional-718233 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-arm64 -p functional-718233 image save --daemon gcr.io/google-containers/addon-resizer:functional-718233 --alsologtostderr: (1.32920239s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-718233
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.40s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-718233 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-718233 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-718233 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 45301: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-718233 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.83s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-718233 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-718233 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [e3d2e616-c6b3-4acd-9e5a-6972bf8fb4be] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E1207 20:10:38.055841    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
helpers_test.go:344: "nginx-svc" [e3d2e616-c6b3-4acd-9e5a-6972bf8fb4be] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.025331599s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.67s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-718233 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.207.53 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
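Note: the tunnel subtests follow the expected workflow: keep minikube tunnel running in one shell so LoadBalancer services receive an ingress IP, then read that IP from the service status. A minimal sketch (the tunnel is backgrounded here only for illustration and may prompt for sudo to add routes, depending on the environment):

    out/minikube-linux-arm64 -p functional-718233 tunnel --alsologtostderr &
    kubectl --context functional-718233 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl -s http://10.104.207.53/    # IP taken from the AccessDirect log line above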

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-718233 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "513.011986ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "119.201527ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.63s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "402.14676ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "90.251128ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)
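Note: the profile subtests differ only in output format; --light likely skips the per-cluster status probe, which would explain the ~90ms runtime versus ~400-500ms for the full listing. A minimal sketch (the jq filter assumes a top-level "valid" array of profiles in the JSON, which this log does not show):

    out/minikube-linux-arm64 profile list
    out/minikube-linux-arm64 profile list -o json --light
    out/minikube-linux-arm64 profile list -o json | jq -r '.valid[].Name'    # assumed JSON shape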

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-718233 /tmp/TestFunctionalparallelMountCmdany-port1997019987/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1701979855638107225" to /tmp/TestFunctionalparallelMountCmdany-port1997019987/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1701979855638107225" to /tmp/TestFunctionalparallelMountCmdany-port1997019987/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1701979855638107225" to /tmp/TestFunctionalparallelMountCmdany-port1997019987/001/test-1701979855638107225
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-718233 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (527.369693ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  7 20:10 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  7 20:10 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  7 20:10 test-1701979855638107225
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh cat /mount-9p/test-1701979855638107225
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-718233 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [139a096d-d5cb-4f80-b1a1-db625e891e1a] Pending
helpers_test.go:344: "busybox-mount" [139a096d-d5cb-4f80-b1a1-db625e891e1a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [139a096d-d5cb-4f80-b1a1-db625e891e1a] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [139a096d-d5cb-4f80-b1a1-db625e891e1a] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.019653668s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-718233 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-718233 /tmp/TestFunctionalparallelMountCmdany-port1997019987/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.88s)
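Note: this test drives a 9p mount from a host temp directory into the guest at /mount-9p, verifies it with findmnt and ls over ssh, then runs a pod that reads and writes through it. A minimal sketch of the manual equivalent (hypothetical host path):

    out/minikube-linux-arm64 mount -p functional-718233 /tmp/host-dir:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-arm64 -p functional-718233 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-718233 ssh -- ls -la /mount-9p
    # Tear down either by killing the background mount process or, as in VerifyCleanup below,
    # with: out/minikube-linux-arm64 mount -p functional-718233 --kill=true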

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-718233 /tmp/TestFunctionalparallelMountCmdspecific-port2300480894/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-718233 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (540.007234ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-718233 /tmp/TestFunctionalparallelMountCmdspecific-port2300480894/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-718233 ssh "sudo umount -f /mount-9p": exit status 1 (447.000233ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-718233 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-718233 /tmp/TestFunctionalparallelMountCmdspecific-port2300480894/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.46s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (3.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-718233 /tmp/TestFunctionalparallelMountCmdVerifyCleanup314121209/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-718233 /tmp/TestFunctionalparallelMountCmdVerifyCleanup314121209/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-718233 /tmp/TestFunctionalparallelMountCmdVerifyCleanup314121209/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-718233 ssh "findmnt -T" /mount1: exit status 1 (1.140243265s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-718233 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-718233 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-718233 /tmp/TestFunctionalparallelMountCmdVerifyCleanup314121209/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-718233 /tmp/TestFunctionalparallelMountCmdVerifyCleanup314121209/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-718233 /tmp/TestFunctionalparallelMountCmdVerifyCleanup314121209/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (3.02s)

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.11s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-718233
--- PASS: TestFunctional/delete_addon-resizer_images (0.11s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-718233
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-718233
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestImageBuild/serial/Setup (33.77s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-570451 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-570451 --driver=docker  --container-runtime=docker: (33.773024975s)
--- PASS: TestImageBuild/serial/Setup (33.77s)

                                                
                                    
x
+
TestImageBuild/serial/NormalBuild (1.82s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-570451
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-570451: (1.815727192s)
--- PASS: TestImageBuild/serial/NormalBuild (1.82s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithBuildArg (0.94s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-570451
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.94s)
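Note: each --build-opt is passed through to the underlying build, so --build-opt=build-arg=ENV_A=test_env_str corresponds to docker's --build-arg ENV_A=test_env_str. A minimal sketch of a context that would consume it (hypothetical Dockerfile; the real testdata/image-build/test-arg content is not shown in this log):

    mkdir -p /tmp/test-arg
    cat > /tmp/test-arg/Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox
    ARG ENV_A
    RUN echo "ENV_A=${ENV_A}"
    EOF
    out/minikube-linux-arm64 image build -t aaa:latest \
      --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache /tmp/test-arg -p image-570451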

                                                
                                    
x
+
TestImageBuild/serial/BuildWithDockerIgnore (0.75s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-570451
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.75s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.74s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-570451
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.74s)

                                                
                                    
x
+
TestIngressAddonLegacy/StartLegacyK8sCluster (113.21s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-362953 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1207 20:11:59.976044    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-362953 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (1m53.210603354s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (113.21s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.05s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-362953 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-362953 addons enable ingress --alsologtostderr -v=5: (11.054373699s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.05s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (1.79s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-362953 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:79: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-362953 addons enable ingress-dns --alsologtostderr -v=5: (1.790934145s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (1.79s)
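Note: both activation subtests are plain addon enable calls against the legacy-cluster profile. A minimal sketch of the same sequence with a verification step (the addons list check is an assumption, not part of this log):

    out/minikube-linux-arm64 -p ingress-addon-legacy-362953 addons enable ingress --alsologtostderr -v=5
    out/minikube-linux-arm64 -p ingress-addon-legacy-362953 addons enable ingress-dns --alsologtostderr -v=5
    out/minikube-linux-arm64 -p ingress-addon-legacy-362953 addons list    # assumed verification step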

                                                
                                    
x
+
TestJSONOutput/start/Command (89.9s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-576378 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E1207 20:15:21.245716    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/functional-718233/client.crt: no such file or directory
E1207 20:15:21.251004    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/functional-718233/client.crt: no such file or directory
E1207 20:15:21.261345    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/functional-718233/client.crt: no such file or directory
E1207 20:15:21.281681    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/functional-718233/client.crt: no such file or directory
E1207 20:15:21.321952    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/functional-718233/client.crt: no such file or directory
E1207 20:15:21.402251    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/functional-718233/client.crt: no such file or directory
E1207 20:15:21.562700    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/functional-718233/client.crt: no such file or directory
E1207 20:15:21.883322    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/functional-718233/client.crt: no such file or directory
E1207 20:15:22.523620    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/functional-718233/client.crt: no such file or directory
E1207 20:15:23.803848    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/functional-718233/client.crt: no such file or directory
E1207 20:15:26.364690    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/functional-718233/client.crt: no such file or directory
E1207 20:15:31.485151    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/functional-718233/client.crt: no such file or directory
E1207 20:15:41.725429    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/functional-718233/client.crt: no such file or directory
E1207 20:16:02.205628    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/functional-718233/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-576378 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m29.895230212s)
--- PASS: TestJSONOutput/start/Command (89.90s)
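Note: with --output=json, start emits one CloudEvent per line (the same shape is visible in the TestErrorJSONOutput stdout further down: specversion, type, data). A minimal sketch of filtering the step messages with jq (hypothetical profile name; the filter keys follow that observed event shape):

    out/minikube-linux-arm64 start -p json-demo --output=json --driver=docker --container-runtime=docker \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'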

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-576378 --output=json --user=testUser
E1207 20:16:43.166291    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/functional-718233/client.crt: no such file or directory
--- PASS: TestJSONOutput/pause/Command (0.65s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.58s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-576378 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.8s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-576378 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-576378 --output=json --user=testUser: (5.80006863s)
--- PASS: TestJSONOutput/stop/Command (5.80s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.27s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-602189 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-602189 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (92.825575ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1876fdf4-5bd4-4706-ba8e-de466840d068","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-602189] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3c0c94f1-36b4-4784-9cbc-afef0fb4857e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17719"}}
	{"specversion":"1.0","id":"88c58492-34c1-4596-b56d-9b90f3a17207","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0fb6730e-0e0e-4ae1-9e3b-ee9ab292729f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17719-2292/kubeconfig"}}
	{"specversion":"1.0","id":"92ae4b52-d2dd-4acc-937e-ec0595d7ad1b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-2292/.minikube"}}
	{"specversion":"1.0","id":"c2b4ad04-d793-44cd-8f19-95b1f5fa4848","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"5bd3208b-b256-4a4e-b3ec-34ea6ea15073","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"96afedf6-83b3-448c-b726-83b3d7890af3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-602189" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-602189
--- PASS: TestErrorJSONOutput (0.27s)
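Note: each --output=json line in the stdout block above is a self-contained CloudEvents-style envelope (specversion, id, source, type, datacontenttype, data), with the payload carried as string values under data. A minimal Go sketch of decoding one such line, assuming only the shape visible above; the Event type is illustrative and is not minikube's own struct.

package main

import (
	"encoding/json"
	"fmt"
)

// Event mirrors the envelope shape seen in the JSON output above; all
// data values in that output are strings, so map[string]string suffices.
type Event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS"}}`
	var ev Event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("%s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
	}
}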

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (38.15s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-716717 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-716717 --network=: (35.819176625s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-716717" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-716717
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-716717: (2.290844854s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.15s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (35.36s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-025175 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-025175 --network=bridge: (33.240996666s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-025175" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-025175
E1207 20:18:05.087346    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/functional-718233/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-025175: (2.098973336s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.36s)

                                                
                                    
x
+
TestKicExistingNetwork (34.15s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-373379 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-373379 --network=existing-network: (32.300792088s)
helpers_test.go:175: Cleaning up "existing-network-373379" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-373379
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-373379: (1.682025629s)
--- PASS: TestKicExistingNetwork (34.15s)

                                                
                                    
x
+
TestKicCustomSubnet (36.47s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-615593 --subnet=192.168.60.0/24
E1207 20:19:04.781091    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt: no such file or directory
E1207 20:19:04.786354    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt: no such file or directory
E1207 20:19:04.796581    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt: no such file or directory
E1207 20:19:04.816906    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt: no such file or directory
E1207 20:19:04.857149    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt: no such file or directory
E1207 20:19:04.937429    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt: no such file or directory
E1207 20:19:05.097802    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt: no such file or directory
E1207 20:19:05.418299    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt: no such file or directory
E1207 20:19:06.059084    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt: no such file or directory
E1207 20:19:07.339339    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt: no such file or directory
E1207 20:19:09.899515    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-615593 --subnet=192.168.60.0/24: (34.344350197s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-615593 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-615593" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-615593
E1207 20:19:15.019773    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt: no such file or directory
E1207 20:19:16.130802    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-615593: (2.102072477s)
--- PASS: TestKicCustomSubnet (36.47s)
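Note: the custom-subnet check above reduces to a single docker network inspect call with a Go template. A rough standalone sketch of the same verification, reusing the profile name and subnet from this run; the comparison logic is an assumption, not the test's actual helper.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask Docker which subnet the profile's network was created with and
	// compare it to the value passed via --subnet (both taken from the run above).
	want := "192.168.60.0/24"
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-615593",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		panic(err)
	}
	got := strings.TrimSpace(string(out))
	if got != want {
		fmt.Printf("subnet mismatch: got %q, want %q\n", got, want)
		return
	}
	fmt.Println("custom subnet applied:", got)
}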

                                                
                                    
x
+
TestKicStaticIP (35.48s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-616215 --static-ip=192.168.200.200
E1207 20:19:25.260836    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt: no such file or directory
E1207 20:19:45.741964    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-616215 --static-ip=192.168.200.200: (33.139892507s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-616215 ip
helpers_test.go:175: Cleaning up "static-ip-616215" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-616215
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-616215: (2.158473534s)
--- PASS: TestKicStaticIP (35.48s)

                                                
                                    
x
+
TestMainNoArgs (0.08s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.08s)

                                                
                                    
x
+
TestMinikubeProfile (74.34s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-561792 --driver=docker  --container-runtime=docker
E1207 20:20:21.245334    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/functional-718233/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-561792 --driver=docker  --container-runtime=docker: (33.199598728s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-564401 --driver=docker  --container-runtime=docker
E1207 20:20:26.702267    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt: no such file or directory
E1207 20:20:48.928160    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/functional-718233/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-564401 --driver=docker  --container-runtime=docker: (35.415638399s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-561792
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-564401
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-564401" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-564401
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-564401: (2.118183278s)
helpers_test.go:175: Cleaning up "first-561792" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-561792
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-561792: (2.299660047s)
--- PASS: TestMinikubeProfile (74.34s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (8.04s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-244900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-244900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.036662237s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.04s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-244900 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (8.28s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-246967 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-246967 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.278893216s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.28s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-246967 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.53s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-244900 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-244900 --alsologtostderr -v=5: (1.533739277s)
--- PASS: TestMountStart/serial/DeleteFirst (1.53s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-246967 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-246967
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-246967: (1.244864235s)
--- PASS: TestMountStart/serial/Stop (1.24s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (9.42s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-246967
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-246967: (8.417838054s)
--- PASS: TestMountStart/serial/RestartStopped (9.42s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-246967 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (80.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-224513 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E1207 20:21:48.622767    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-arm64 start -p multinode-224513 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m19.979727522s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (80.70s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (55.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-224513 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-224513 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-224513 -- rollout status deployment/busybox: (2.983185281s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-224513 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-224513 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-224513 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-224513 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-224513 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-224513 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-224513 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-224513 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-224513 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-224513 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-224513 -- exec busybox-5bc68d56bd-f26zh -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-224513 -- exec busybox-5bc68d56bd-hs5rm -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-224513 -- exec busybox-5bc68d56bd-f26zh -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-224513 -- exec busybox-5bc68d56bd-hs5rm -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-224513 -- exec busybox-5bc68d56bd-f26zh -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-224513 -- exec busybox-5bc68d56bd-hs5rm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (55.83s)
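Note: the repeated jsonpath queries above are a poll loop, re-reading pod IPs until the two-replica busybox deployment reports one IP per pod. A hedged Go sketch of that polling pattern; the context name and replica count are taken from this run, while the retry count and interval are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Re-run the jsonpath query until the busybox deployment reports one
	// pod IP per replica (2 in the run above). 20 tries x 5s is an assumption.
	const wantIPs = 2
	for i := 0; i < 20; i++ {
		out, err := exec.Command("kubectl", "--context", "multinode-224513",
			"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
		if err == nil {
			if ips := strings.Fields(string(out)); len(ips) >= wantIPs {
				fmt.Println("pod IPs:", ips)
				return
			}
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for pod IPs")
}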

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (1.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-224513 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-224513 -- exec busybox-5bc68d56bd-f26zh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-224513 -- exec busybox-5bc68d56bd-f26zh -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-224513 -- exec busybox-5bc68d56bd-hs5rm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-224513 -- exec busybox-5bc68d56bd-hs5rm -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.29s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (21.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-224513 -v 3 --alsologtostderr
E1207 20:24:04.780950    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt: no such file or directory
E1207 20:24:16.130096    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-224513 -v 3 --alsologtostderr: (20.943499674s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (21.71s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-224513 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (11.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 cp testdata/cp-test.txt multinode-224513:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 ssh -n multinode-224513 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 cp multinode-224513:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile694666210/001/cp-test_multinode-224513.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 ssh -n multinode-224513 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 cp multinode-224513:/home/docker/cp-test.txt multinode-224513-m02:/home/docker/cp-test_multinode-224513_multinode-224513-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 ssh -n multinode-224513 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 ssh -n multinode-224513-m02 "sudo cat /home/docker/cp-test_multinode-224513_multinode-224513-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 cp multinode-224513:/home/docker/cp-test.txt multinode-224513-m03:/home/docker/cp-test_multinode-224513_multinode-224513-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 ssh -n multinode-224513 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 ssh -n multinode-224513-m03 "sudo cat /home/docker/cp-test_multinode-224513_multinode-224513-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 cp testdata/cp-test.txt multinode-224513-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 ssh -n multinode-224513-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 cp multinode-224513-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile694666210/001/cp-test_multinode-224513-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 ssh -n multinode-224513-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 cp multinode-224513-m02:/home/docker/cp-test.txt multinode-224513:/home/docker/cp-test_multinode-224513-m02_multinode-224513.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 ssh -n multinode-224513-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 ssh -n multinode-224513 "sudo cat /home/docker/cp-test_multinode-224513-m02_multinode-224513.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 cp multinode-224513-m02:/home/docker/cp-test.txt multinode-224513-m03:/home/docker/cp-test_multinode-224513-m02_multinode-224513-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 ssh -n multinode-224513-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 ssh -n multinode-224513-m03 "sudo cat /home/docker/cp-test_multinode-224513-m02_multinode-224513-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 cp testdata/cp-test.txt multinode-224513-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 ssh -n multinode-224513-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 cp multinode-224513-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile694666210/001/cp-test_multinode-224513-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 ssh -n multinode-224513-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 cp multinode-224513-m03:/home/docker/cp-test.txt multinode-224513:/home/docker/cp-test_multinode-224513-m03_multinode-224513.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 ssh -n multinode-224513-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 ssh -n multinode-224513 "sudo cat /home/docker/cp-test_multinode-224513-m03_multinode-224513.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 cp multinode-224513-m03:/home/docker/cp-test.txt multinode-224513-m02:/home/docker/cp-test_multinode-224513-m03_multinode-224513-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 ssh -n multinode-224513-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 ssh -n multinode-224513-m02 "sudo cat /home/docker/cp-test_multinode-224513-m03_multinode-224513-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.54s)
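Note: every copy above is verified the same way, by reading the file back over "minikube ssh -n <node> sudo cat ..." and comparing it with the source. A simplified Go sketch of one such round trip; the binary path, profile, node, and file paths are copied from this run and should be treated as placeholders.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Copy a local file onto a node, read it back over ssh, and compare.
	src := "testdata/cp-test.txt"
	if err := exec.Command("out/minikube-linux-arm64", "-p", "multinode-224513",
		"cp", src, "multinode-224513-m02:/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}
	got, err := exec.Command("out/minikube-linux-arm64", "-p", "multinode-224513",
		"ssh", "-n", "multinode-224513-m02", "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	want, err := os.ReadFile(src)
	if err != nil {
		panic(err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		fmt.Println("copied file differs from the source")
		return
	}
	fmt.Println("cp round trip verified on multinode-224513-m02")
}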

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-arm64 -p multinode-224513 node stop m03: (1.25251176s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-224513 status: exit status 7 (581.954155ms)

                                                
                                                
-- stdout --
	multinode-224513
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-224513-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-224513-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-224513 status --alsologtostderr: exit status 7 (572.5787ms)

                                                
                                                
-- stdout --
	multinode-224513
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-224513-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-224513-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 20:24:30.937312  115265 out.go:296] Setting OutFile to fd 1 ...
	I1207 20:24:30.937477  115265 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:24:30.937488  115265 out.go:309] Setting ErrFile to fd 2...
	I1207 20:24:30.937494  115265 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:24:30.937754  115265 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-2292/.minikube/bin
	I1207 20:24:30.937934  115265 out.go:303] Setting JSON to false
	I1207 20:24:30.937998  115265 mustload.go:65] Loading cluster: multinode-224513
	I1207 20:24:30.938118  115265 notify.go:220] Checking for updates...
	I1207 20:24:30.938549  115265 config.go:182] Loaded profile config "multinode-224513": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 20:24:30.938561  115265 status.go:255] checking status of multinode-224513 ...
	I1207 20:24:30.939164  115265 cli_runner.go:164] Run: docker container inspect multinode-224513 --format={{.State.Status}}
	I1207 20:24:30.963671  115265 status.go:330] multinode-224513 host status = "Running" (err=<nil>)
	I1207 20:24:30.963703  115265 host.go:66] Checking if "multinode-224513" exists ...
	I1207 20:24:30.964003  115265 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-224513
	I1207 20:24:30.994350  115265 host.go:66] Checking if "multinode-224513" exists ...
	I1207 20:24:30.994659  115265 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 20:24:30.994700  115265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-224513
	I1207 20:24:31.021965  115265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/multinode-224513/id_rsa Username:docker}
	I1207 20:24:31.115657  115265 ssh_runner.go:195] Run: systemctl --version
	I1207 20:24:31.121861  115265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 20:24:31.136539  115265 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1207 20:24:31.209411  115265 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-12-07 20:24:31.200071413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215044096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1207 20:24:31.210045  115265 kubeconfig.go:92] found "multinode-224513" server: "https://192.168.58.2:8443"
	I1207 20:24:31.210065  115265 api_server.go:166] Checking apiserver status ...
	I1207 20:24:31.210106  115265 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 20:24:31.223488  115265 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2218/cgroup
	I1207 20:24:31.234533  115265 api_server.go:182] apiserver freezer: "2:freezer:/docker/2aa723364b5767602827449b2daf7caefe2976f09255f7392ba5936b0a0bb608/kubepods/burstable/pod8ba5ea816cbe50d9bec1c6bc311a4e2a/21e336c5543380b4f7333ad7831ef8c3e12516b35724336ac5a5745ee562213c"
	I1207 20:24:31.234603  115265 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2aa723364b5767602827449b2daf7caefe2976f09255f7392ba5936b0a0bb608/kubepods/burstable/pod8ba5ea816cbe50d9bec1c6bc311a4e2a/21e336c5543380b4f7333ad7831ef8c3e12516b35724336ac5a5745ee562213c/freezer.state
	I1207 20:24:31.245036  115265 api_server.go:204] freezer state: "THAWED"
	I1207 20:24:31.245073  115265 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1207 20:24:31.254067  115265 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1207 20:24:31.254098  115265 status.go:421] multinode-224513 apiserver status = Running (err=<nil>)
	I1207 20:24:31.254109  115265 status.go:257] multinode-224513 status: &{Name:multinode-224513 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1207 20:24:31.254137  115265 status.go:255] checking status of multinode-224513-m02 ...
	I1207 20:24:31.254444  115265 cli_runner.go:164] Run: docker container inspect multinode-224513-m02 --format={{.State.Status}}
	I1207 20:24:31.272196  115265 status.go:330] multinode-224513-m02 host status = "Running" (err=<nil>)
	I1207 20:24:31.272223  115265 host.go:66] Checking if "multinode-224513-m02" exists ...
	I1207 20:24:31.272512  115265 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-224513-m02
	I1207 20:24:31.291409  115265 host.go:66] Checking if "multinode-224513-m02" exists ...
	I1207 20:24:31.291709  115265 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1207 20:24:31.291755  115265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-224513-m02
	I1207 20:24:31.308642  115265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/17719-2292/.minikube/machines/multinode-224513-m02/id_rsa Username:docker}
	I1207 20:24:31.399156  115265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 20:24:31.412487  115265 status.go:257] multinode-224513-m02 status: &{Name:multinode-224513-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1207 20:24:31.412531  115265 status.go:255] checking status of multinode-224513-m03 ...
	I1207 20:24:31.412889  115265 cli_runner.go:164] Run: docker container inspect multinode-224513-m03 --format={{.State.Status}}
	I1207 20:24:31.431165  115265 status.go:330] multinode-224513-m03 host status = "Stopped" (err=<nil>)
	I1207 20:24:31.431185  115265 status.go:343] host is not running, skipping remaining checks
	I1207 20:24:31.431193  115265 status.go:257] multinode-224513-m03 status: &{Name:multinode-224513-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.41s)
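Note: the status trace above shows how a node's state is determined: inspect the Docker container, ssh in to check the kubelet service, locate the kube-apiserver process and its freezer cgroup, then probe https://<node-ip>:8443/healthz and treat HTTP 200 as a running apiserver. A minimal sketch of that final health probe; the InsecureSkipVerify client is an assumption made to keep the sketch self-contained (a real check would trust the cluster's CA).

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	// Node IP and port come from the trace above ("Checking apiserver healthz at ...").
	resp, err := client.Get("https://192.168.58.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz returned", resp.StatusCode) // the run above logged 200: ok
}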

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (14.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 node start m03 --alsologtostderr
E1207 20:24:32.463694    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-224513 node start m03 --alsologtostderr: (13.344005119s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (14.41s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (120.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-224513
multinode_test.go:318: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-224513
multinode_test.go:318: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-224513: (22.71393592s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-224513 --wait=true -v=8 --alsologtostderr
E1207 20:25:21.244208    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/functional-718233/client.crt: no such file or directory
E1207 20:25:39.176803    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-arm64 start -p multinode-224513 --wait=true -v=8 --alsologtostderr: (1m37.746216848s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-224513
--- PASS: TestMultiNode/serial/RestartKeepsNodes (120.63s)
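Note: the restart check above is a straight before/after comparison of the node list around a stop and a --wait=true start. A compact Go sketch of the same sequence, reusing the binary path and profile name from this run; the plain string comparison at the end is a simplification.

package main

import (
	"fmt"
	"os/exec"
)

// run executes the minikube binary used in this report and panics on failure.
func run(args ...string) string {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	return string(out)
}

func main() {
	before := run("node", "list", "-p", "multinode-224513")
	run("stop", "-p", "multinode-224513")
	run("start", "-p", "multinode-224513", "--wait=true")
	after := run("node", "list", "-p", "multinode-224513")
	if before == after {
		fmt.Println("node list unchanged across restart")
	} else {
		fmt.Printf("node list changed:\nbefore:\n%safter:\n%s", before, after)
	}
}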

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p multinode-224513 node delete m03: (4.556892384s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.32s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (21.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-arm64 -p multinode-224513 stop: (21.724557573s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-224513 status: exit status 7 (120.980225ms)

                                                
                                                
-- stdout --
	multinode-224513
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-224513-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-224513 status --alsologtostderr: exit status 7 (108.174844ms)

                                                
                                                
-- stdout --
	multinode-224513
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-224513-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 20:27:13.717917  131483 out.go:296] Setting OutFile to fd 1 ...
	I1207 20:27:13.718132  131483 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:27:13.718165  131483 out.go:309] Setting ErrFile to fd 2...
	I1207 20:27:13.718185  131483 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 20:27:13.718566  131483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17719-2292/.minikube/bin
	I1207 20:27:13.718797  131483 out.go:303] Setting JSON to false
	I1207 20:27:13.718900  131483 mustload.go:65] Loading cluster: multinode-224513
	I1207 20:27:13.718952  131483 notify.go:220] Checking for updates...
	I1207 20:27:13.719407  131483 config.go:182] Loaded profile config "multinode-224513": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 20:27:13.719440  131483 status.go:255] checking status of multinode-224513 ...
	I1207 20:27:13.720407  131483 cli_runner.go:164] Run: docker container inspect multinode-224513 --format={{.State.Status}}
	I1207 20:27:13.738811  131483 status.go:330] multinode-224513 host status = "Stopped" (err=<nil>)
	I1207 20:27:13.738829  131483 status.go:343] host is not running, skipping remaining checks
	I1207 20:27:13.738836  131483 status.go:257] multinode-224513 status: &{Name:multinode-224513 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1207 20:27:13.738871  131483 status.go:255] checking status of multinode-224513-m02 ...
	I1207 20:27:13.739162  131483 cli_runner.go:164] Run: docker container inspect multinode-224513-m02 --format={{.State.Status}}
	I1207 20:27:13.758147  131483 status.go:330] multinode-224513-m02 host status = "Stopped" (err=<nil>)
	I1207 20:27:13.758169  131483 status.go:343] host is not running, skipping remaining checks
	I1207 20:27:13.758177  131483 status.go:257] multinode-224513-m02 status: &{Name:multinode-224513-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.95s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (84.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-224513 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:382: (dbg) Done: out/minikube-linux-arm64 start -p multinode-224513 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m23.993405752s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p multinode-224513 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (84.77s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (36.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-224513
multinode_test.go:480: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-224513-m02 --driver=docker  --container-runtime=docker
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-224513-m02 --driver=docker  --container-runtime=docker: exit status 14 (91.668779ms)

                                                
                                                
-- stdout --
	* [multinode-224513-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17719-2292/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-2292/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-224513-m02' is duplicated with machine name 'multinode-224513-m02' in profile 'multinode-224513'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-224513-m03 --driver=docker  --container-runtime=docker
E1207 20:29:04.780833    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-arm64 start -p multinode-224513-m03 --driver=docker  --container-runtime=docker: (33.270684902s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-224513
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-224513: exit status 80 (342.47632ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-224513
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-224513-m03 already exists in multinode-224513-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-224513-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-224513-m03: (2.326908304s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.10s)

                                                
                                    
x
+
TestPreload (180.72s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-684209 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E1207 20:30:21.244090    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/functional-718233/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-684209 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m49.648842928s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-684209 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-684209 image pull gcr.io/k8s-minikube/busybox: (1.327631549s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-684209
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-684209: (10.820662162s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-684209 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
E1207 20:31:44.288579    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/functional-718233/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-684209 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (56.409420526s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-684209 image list
helpers_test.go:175: Cleaning up "test-preload-684209" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-684209
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-684209: (2.275083977s)
--- PASS: TestPreload (180.72s)
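The preload check above amounts to: build a cluster without preloaded images, pull an extra image, restart the cluster, and confirm the image survives. A condensed sketch using the same flags as this run (profile name illustrative):

	minikube start -p preload-demo --memory=2200 --preload=false --kubernetes-version=v1.24.4 --driver=docker
	minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
	minikube stop -p preload-demo
	minikube start -p preload-demo --driver=docker
	minikube -p preload-demo image list    # the pulled busybox image should still be listed
	minikube delete -p preload-demo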

                                                
                                    
x
+
TestScheduledStopUnix (109s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-053736 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-053736 --memory=2048 --driver=docker  --container-runtime=docker: (35.494716228s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-053736 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-053736 -n scheduled-stop-053736
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-053736 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-053736 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-053736 -n scheduled-stop-053736
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-053736
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-053736 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1207 20:34:04.781701    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-053736
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-053736: exit status 7 (96.251084ms)

                                                
                                                
-- stdout --
	scheduled-stop-053736
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-053736 -n scheduled-stop-053736
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-053736 -n scheduled-stop-053736: exit status 7 (89.967249ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-053736" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-053736
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-053736: (1.721582225s)
--- PASS: TestScheduledStopUnix (109.00s)
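The scheduled-stop flow above can be driven by hand with the same flags; the 5m and 15s values mirror the run, the profile name is illustrative:

	minikube start -p sched-demo --memory=2048 --driver=docker
	minikube stop -p sched-demo --schedule 5m         # arm a stop five minutes out
	minikube stop -p sched-demo --cancel-scheduled    # disarm it again
	minikube stop -p sched-demo --schedule 15s        # re-arm with a short timer
	sleep 20
	minikube status -p sched-demo                     # exit status 7, host reported Stopped
	minikube delete -p sched-demo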

                                                
                                    
x
+
TestSkaffold (109.71s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe2816656064 version
skaffold_test.go:63: skaffold version: v2.9.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-014348 --memory=2600 --driver=docker  --container-runtime=docker
E1207 20:34:16.130831    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-014348 --memory=2600 --driver=docker  --container-runtime=docker: (32.570995862s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe2816656064 run --minikube-profile skaffold-014348 --kube-context skaffold-014348 --status-check=true --port-forward=false --interactive=false
E1207 20:35:21.244830    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/functional-718233/client.crt: no such file or directory
E1207 20:35:27.824825    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe2816656064 run --minikube-profile skaffold-014348 --kube-context skaffold-014348 --status-check=true --port-forward=false --interactive=false: (1m1.669257194s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-5688665667-bmpwg" [924ebd9f-c496-4ed1-88ff-672dfcc8bda3] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.026876364s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-758fb7b49c-gc9k6" [d6494045-bac2-41bc-b647-302ca451a867] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.009586276s
helpers_test.go:175: Cleaning up "skaffold-014348" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-014348
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-014348: (3.02331855s)
--- PASS: TestSkaffold (109.71s)
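The skaffold interplay above assumes a skaffold.yaml in the working directory (the test deploys its own sample app); a minimal sketch of the same invocation pattern, profile name illustrative:

	minikube start -p skaffold-demo --memory=2600 --driver=docker
	# deploy with skaffold, pinning both the minikube profile and the kube context
	skaffold run --minikube-profile skaffold-demo --kube-context skaffold-demo \
	  --status-check=true --port-forward=false --interactive=false
	kubectl --context skaffold-demo get pods
	minikube delete -p skaffold-demo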

                                                
                                    
x
+
TestInsufficientStorage (11.77s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-978646 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-978646 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (9.32576093s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8c41848c-cbd5-481f-9843-0d0c240e7fca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-978646] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fff8bc44-e18e-49b7-9a36-638de3c004a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17719"}}
	{"specversion":"1.0","id":"5592b0dd-bddd-4317-b76a-0a69967e0955","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1c6402ee-7163-432b-be51-944ce0f2f1ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17719-2292/kubeconfig"}}
	{"specversion":"1.0","id":"7bc769e8-2dbb-4a14-b30f-f61e5ee6f9b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-2292/.minikube"}}
	{"specversion":"1.0","id":"5b4407d6-6aea-4d7a-83a5-0d6a751bc72b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"93e0bccd-ea51-4e98-a74d-c16214483be1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6a482401-490b-4a25-b0fe-adc9494f7cd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"5e91af07-3639-4bd3-9b40-7e7c2365fbc8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"82adaf19-4157-4f33-8901-68bc48a08028","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"426541a5-1c89-438d-81ee-d8e7befab529","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"991c41ba-cf89-4daa-b937-3a0f57c2f624","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-978646 in cluster insufficient-storage-978646","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"09e5ffaa-77d0-4b07-b744-1182323fc138","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"71be8dec-d634-4913-81a8-f1ad0cac06e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"4ec47096-9908-421f-83a9-f2a46195323b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-978646 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-978646 --output=json --layout=cluster: exit status 7 (332.189584ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-978646","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-978646","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1207 20:36:08.396929  167670 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-978646" does not appear in /home/jenkins/minikube-integration/17719-2292/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-978646 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-978646 --output=json --layout=cluster: exit status 7 (320.481729ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-978646","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-978646","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1207 20:36:08.720157  167720 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-978646" does not appear in /home/jenkins/minikube-integration/17719-2292/kubeconfig
	E1207 20:36:08.732132  167720 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/insufficient-storage-978646/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-978646" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-978646
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-978646: (1.792967922s)
--- PASS: TestInsufficientStorage (11.77s)
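The out-of-space condition above appears to be simulated rather than real: the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE values visible in the stdout cap what minikube believes is free on /var. A hedged sketch of the same check (profile name illustrative):

	# pretend only 19 of 100 units are free; start then aborts with exit code 26 (RSRC_DOCKER_STORAGE)
	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  minikube start -p storage-demo --memory=2048 --output=json --driver=docker
	echo $?                                                          # 26
	minikube status -p storage-demo --output=json --layout=cluster   # StatusName "InsufficientStorage", exit 7
	minikube delete -p storage-demo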

                                                
                                    
x
+
TestRunningBinaryUpgrade (135.59s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.17.0.3096826284.exe start -p running-upgrade-845214 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.17.0.3096826284.exe start -p running-upgrade-845214 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m34.682764467s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-845214 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1207 20:49:04.781805    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt: no such file or directory
E1207 20:49:16.130808    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-845214 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (37.0227172s)
helpers_test.go:175: Cleaning up "running-upgrade-845214" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-845214
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-845214: (2.83532526s)
--- PASS: TestRunningBinaryUpgrade (135.59s)
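The running-binary upgrade above is two starts against the same profile with different binaries; a sketch, with a placeholder path standing in for whichever old release is on hand:

	# bring the profile up with an older minikube release, then upgrade it in place with the current binary
	/path/to/minikube-v1.17.0 start -p upgrade-demo --memory=2200 --vm-driver=docker --container-runtime=docker
	minikube start -p upgrade-demo --memory=2200 --driver=docker --container-runtime=docker
	minikube delete -p upgrade-demo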

                                                
                                    
x
+
TestKubernetesUpgrade (192.45s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-771944 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1207 20:45:21.243985    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/functional-718233/client.crt: no such file or directory
E1207 20:45:45.682905    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/skaffold-014348/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-771944 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m3.157374142s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-771944
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-771944: (1.296533163s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-771944 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-771944 status --format={{.Host}}: exit status 7 (86.984674ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-771944 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1207 20:46:13.366143    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/skaffold-014348/client.crt: no such file or directory
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-771944 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m37.037417393s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-771944 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-771944 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-771944 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker: exit status 106 (116.026975ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-771944] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17719-2292/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-2292/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-771944
	    minikube start -p kubernetes-upgrade-771944 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7719442 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-771944 --kubernetes-version=v1.29.0-rc.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-771944 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-771944 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (27.626898162s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-771944" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-771944
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-771944: (3.006862608s)
--- PASS: TestKubernetesUpgrade (192.45s)
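Condensed, the upgrade/downgrade expectations above are (versions taken from this run, profile name illustrative):

	minikube start -p k8s-upgrade-demo --memory=2200 --kubernetes-version=v1.16.0 --driver=docker
	minikube stop -p k8s-upgrade-demo
	# upgrading the stopped cluster is allowed
	minikube start -p k8s-upgrade-demo --memory=2200 --kubernetes-version=v1.29.0-rc.1 --driver=docker
	# downgrading is not: this exits 106 (K8S_DOWNGRADE_UNSUPPORTED) and suggests delete + recreate
	minikube start -p k8s-upgrade-demo --memory=2200 --kubernetes-version=v1.16.0 --driver=docker
	minikube delete -p k8s-upgrade-demo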

                                                
                                    
x
+
TestMissingContainerUpgrade (117.88s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.2456407974.exe start -p missing-upgrade-883002 --memory=2200 --driver=docker  --container-runtime=docker
E1207 20:43:29.525945    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/skaffold-014348/client.crt: no such file or directory
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.17.0.2456407974.exe start -p missing-upgrade-883002 --memory=2200 --driver=docker  --container-runtime=docker: (56.637643357s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-883002
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-883002: (1.091964246s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-883002
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-883002 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1207 20:44:04.781168    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt: no such file or directory
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-883002 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (56.557249169s)
helpers_test.go:175: Cleaning up "missing-upgrade-883002" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-883002
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-883002: (2.381680086s)
--- PASS: TestMissingContainerUpgrade (117.88s)
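The "missing container" case above removes the node container behind minikube's back and checks that the current binary can still recover the profile; a sketch with a placeholder old-release path and an illustrative profile name:

	/path/to/minikube-v1.17.0 start -p missing-demo --memory=2200 --driver=docker
	docker stop missing-demo && docker rm missing-demo               # drop the node container out from under minikube
	minikube start -p missing-demo --memory=2200 --driver=docker     # current binary recreates it
	minikube delete -p missing-demo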

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-843941 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-843941 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (116.445149ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-843941] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17719-2292/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17719-2292/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)
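The MK_USAGE failure above is about mutually exclusive flags; the suggested fix in the output applies when a version is pinned in the global config. Illustrative profile name:

	# --no-kubernetes and --kubernetes-version cannot be combined (exit status 14)
	minikube start -p nok8s-demo --no-kubernetes --kubernetes-version=1.20 --driver=docker
	# clear a globally pinned version if that is what keeps injecting the flag
	minikube config unset kubernetes-version
	minikube start -p nok8s-demo --no-kubernetes --driver=docker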

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (47.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-843941 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-843941 --driver=docker  --container-runtime=docker: (46.670938832s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-843941 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (47.11s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (17.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-843941 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-843941 --no-kubernetes --driver=docker  --container-runtime=docker: (15.570875962s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-843941 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-843941 status -o json: exit status 2 (346.602468ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-843941","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-843941
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-843941: (1.865507772s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.78s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (7.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-843941 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-843941 --no-kubernetes --driver=docker  --container-runtime=docker: (7.292345582s)
--- PASS: TestNoKubernetes/serial/Start (7.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-843941 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-843941 "sudo systemctl is-active --quiet service kubelet": exit status 1 (317.551895ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)
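The exit status 1 here is the expected outcome: the ssh'd systemctl probe reports kubelet inactive, which is the pass condition for a --no-kubernetes profile. The same probe by hand (profile name illustrative):

	minikube ssh -p nok8s-demo "sudo systemctl is-active --quiet service kubelet"
	echo $?    # non-zero while kubelet is not running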

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.01s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-843941
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-843941: (1.248588311s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (8.68s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-843941 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-843941 --driver=docker  --container-runtime=docker: (8.680326464s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.68s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-843941 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-843941 "sudo systemctl is-active --quiet service kubelet": exit status 1 (314.684073ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.08s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.08s)

                                                
                                    
x
+
TestPause/serial/Start (57.54s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-291693 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E1207 20:48:24.289443    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/functional-718233/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-291693 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (57.543230595s)
--- PASS: TestPause/serial/Start (57.54s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (39.88s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-291693 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-291693 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (39.851716216s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (39.88s)

                                                
                                    
x
+
TestPause/serial/Pause (0.99s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-291693 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.99s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.38s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-291693 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-291693 --output=json --layout=cluster: exit status 2 (375.229718ms)

                                                
                                                
-- stdout --
	{"Name":"pause-291693","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-291693","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.38s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.73s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-291693 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.73s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.24s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-291693 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-291693 --alsologtostderr -v=5: (1.244906187s)
--- PASS: TestPause/serial/PauseAgain (1.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (97.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-590458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-590458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m37.479220283s)
--- PASS: TestNetworkPlugins/group/auto/Start (97.48s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.85s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-291693 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-291693 --alsologtostderr -v=5: (2.851804631s)
--- PASS: TestPause/serial/DeletePaused (2.85s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.3s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-291693
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-291693: exit status 1 (18.70167ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-291693: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.30s)
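End to end, the pause lifecycle covered by the last few tests looks like this (profile name illustrative; the 418/"Paused" status code is what the run above reports):

	minikube start -p pause-demo --memory=2048 --install-addons=false --wait=all --driver=docker
	minikube pause -p pause-demo
	minikube status -p pause-demo --output=json --layout=cluster   # exit 2, apiserver StatusName "Paused" (418)
	minikube unpause -p pause-demo
	minikube delete -p pause-demo
	docker volume inspect pause-demo    # exit 1 once the profile's volume is gone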

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (71.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-590458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E1207 20:50:21.244919    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/functional-718233/client.crt: no such file or directory
E1207 20:50:45.682008    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/skaffold-014348/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-590458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m11.120401522s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-9l5qw" [8a2a93bb-17a6-462e-ac75-9bb12ca9bad0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.034021218s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-590458 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-590458 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-z2lc6" [e732a442-5572-45af-9e5d-53e8400afc30] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-z2lc6" [e732a442-5572-45af-9e5d-53e8400afc30] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.01404661s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-590458 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-590458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-590458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)
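Each CNI group in this report runs the same three probes against a small netcat deployment; by hand they look like the following, with the context and manifest path taken from this run (the manifest lives in the suite's testdata directory):

	kubectl --context kindnet-590458 replace --force -f testdata/netcat-deployment.yaml
	# DNS: resolve the in-cluster service name
	kubectl --context kindnet-590458 exec deployment/netcat -- nslookup kubernetes.default
	# Localhost: port 8080 reachable from inside the pod
	kubectl --context kindnet-590458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# HairPin: the pod reaching itself through its own service
	kubectl --context kindnet-590458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"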

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-590458 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-590458 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cq9mq" [9a75331d-2e28-45a0-af3d-8b7216519785] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-cq9mq" [9a75331d-2e28-45a0-af3d-8b7216519785] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.01686545s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-590458 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-590458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-590458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (85.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-590458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-590458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m25.642164433s)
--- PASS: TestNetworkPlugins/group/calico/Start (85.64s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (72.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-590458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E1207 20:52:07.825905    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-590458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m12.324528802s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (72.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-5s694" [b2b1191b-f1df-42a2-b9bd-49280846bbf0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.032587942s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-590458 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-590458 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nh94k" [38501774-40eb-4736-ac09-2aea23282d27] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-nh94k" [38501774-40eb-4736-ac09-2aea23282d27] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.02972191s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.50s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-590458 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-590458 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6fhcv" [284af30c-cda9-4d5f-8744-36e27e54e065] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6fhcv" [284af30c-cda9-4d5f-8744-36e27e54e065] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.010274359s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-590458 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-590458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-590458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-590458 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-590458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-590458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/Start (94.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-590458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-590458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m34.750152762s)
--- PASS: TestNetworkPlugins/group/false/Start (94.75s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (57.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-590458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E1207 20:54:04.781811    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt: no such file or directory
E1207 20:54:16.130625    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-590458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (57.993595006s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (57.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-590458 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-590458 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-69fpk" [ed38f1b9-6e47-4c87-a002-583654b1ca45] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-69fpk" [ed38f1b9-6e47-4c87-a002-583654b1ca45] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.012694707s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-590458 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-590458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-590458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (71.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-590458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-590458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m11.240176586s)
--- PASS: TestNetworkPlugins/group/flannel/Start (71.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-590458 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/NetCatPod (13.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-590458 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-28g2c" [f158947e-24b9-4974-b38f-55cd196c92d0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-28g2c" [f158947e-24b9-4974-b38f-55cd196c92d0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.01550112s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-590458 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-590458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/HairPin (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-590458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (53.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-590458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E1207 20:56:17.405906    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kindnet-590458/client.crt: no such file or directory
E1207 20:56:20.483231    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/auto-590458/client.crt: no such file or directory
E1207 20:56:20.488472    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/auto-590458/client.crt: no such file or directory
E1207 20:56:20.498711    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/auto-590458/client.crt: no such file or directory
E1207 20:56:20.518950    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/auto-590458/client.crt: no such file or directory
E1207 20:56:20.559202    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/auto-590458/client.crt: no such file or directory
E1207 20:56:20.639436    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/auto-590458/client.crt: no such file or directory
E1207 20:56:20.799594    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/auto-590458/client.crt: no such file or directory
E1207 20:56:21.119818    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/auto-590458/client.crt: no such file or directory
E1207 20:56:21.760944    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/auto-590458/client.crt: no such file or directory
E1207 20:56:23.041195    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/auto-590458/client.crt: no such file or directory
E1207 20:56:25.601948    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/auto-590458/client.crt: no such file or directory
E1207 20:56:30.722260    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/auto-590458/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-590458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (53.923340306s)
--- PASS: TestNetworkPlugins/group/bridge/Start (53.92s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-qv9lx" [c7601d3f-10e2-4c19-b1cc-e05153e44f9c] Running
E1207 20:56:37.886811    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kindnet-590458/client.crt: no such file or directory
E1207 20:56:40.962801    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/auto-590458/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.029938415s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-590458 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-590458 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zhlpn" [3b9bbc4e-4687-436b-a1d4-6401e3157867] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zhlpn" [3b9bbc4e-4687-436b-a1d4-6401e3157867] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.018065092s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-590458 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-590458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-590458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-590458 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (12.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-590458 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6cpbv" [d8f5d382-7911-4a70-87d5-581c87bf3391] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1207 20:57:08.726694    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/skaffold-014348/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-6cpbv" [d8f5d382-7911-4a70-87d5-581c87bf3391] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.01067467s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.56s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (26.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-590458 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-590458 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.331995413s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-590458 exec deployment/netcat -- nslookup kubernetes.default
E1207 20:57:42.404654    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/auto-590458/client.crt: no such file or directory
net_test.go:175: (dbg) Done: kubectl --context bridge-590458 exec deployment/netcat -- nslookup kubernetes.default: (10.273476547s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (26.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Start (87.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-590458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-590458 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m27.169709817s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (87.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-590458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-590458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (125.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-999300 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E1207 20:58:09.592420    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/calico-590458/client.crt: no such file or directory
E1207 20:58:11.449985    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/custom-flannel-590458/client.crt: no such file or directory
E1207 20:58:11.455242    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/custom-flannel-590458/client.crt: no such file or directory
E1207 20:58:11.465485    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/custom-flannel-590458/client.crt: no such file or directory
E1207 20:58:11.485796    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/custom-flannel-590458/client.crt: no such file or directory
E1207 20:58:11.526037    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/custom-flannel-590458/client.crt: no such file or directory
E1207 20:58:11.606342    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/custom-flannel-590458/client.crt: no such file or directory
E1207 20:58:11.766681    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/custom-flannel-590458/client.crt: no such file or directory
E1207 20:58:12.087210    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/custom-flannel-590458/client.crt: no such file or directory
E1207 20:58:12.727402    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/custom-flannel-590458/client.crt: no such file or directory
E1207 20:58:14.007652    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/custom-flannel-590458/client.crt: no such file or directory
E1207 20:58:14.713601    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/calico-590458/client.crt: no such file or directory
E1207 20:58:16.568127    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/custom-flannel-590458/client.crt: no such file or directory
E1207 20:58:21.688568    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/custom-flannel-590458/client.crt: no such file or directory
E1207 20:58:24.954081    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/calico-590458/client.crt: no such file or directory
E1207 20:58:31.929012    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/custom-flannel-590458/client.crt: no such file or directory
E1207 20:58:40.768677    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kindnet-590458/client.crt: no such file or directory
E1207 20:58:45.434296    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/calico-590458/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-999300 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (2m5.267521053s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (125.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/KubeletFlags (0.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-590458 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/NetCatPod (11.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-590458 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5c2fg" [165aec7f-4926-4d47-86f0-a4b97ac9fa1f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1207 20:58:52.409932    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/custom-flannel-590458/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-5c2fg" [165aec7f-4926-4d47-86f0-a4b97ac9fa1f] Running
E1207 20:58:59.178097    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.019508021s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-590458 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Localhost (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-590458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/HairPin (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-590458 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.27s)
E1207 21:14:16.130109    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
E1207 21:14:27.517148    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/calico-590458/client.crt: no such file or directory
E1207 21:14:34.492017    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/custom-flannel-590458/client.crt: no such file or directory
E1207 21:14:53.075490    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/enable-default-cni-590458/client.crt: no such file or directory
E1207 21:15:14.707361    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/old-k8s-version-999300/client.crt: no such file or directory
E1207 21:15:21.069741    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/no-preload-998622/client.crt: no such file or directory
E1207 21:15:21.244049    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/functional-718233/client.crt: no such file or directory
E1207 21:15:28.577083    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/false-590458/client.crt: no such file or directory
E1207 21:15:39.178730    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
E1207 21:15:42.390332    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/old-k8s-version-999300/client.crt: no such file or directory
E1207 21:15:45.681946    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/skaffold-014348/client.crt: no such file or directory
E1207 21:15:48.757084    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/no-preload-998622/client.crt: no such file or directory
E1207 21:15:56.922815    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kindnet-590458/client.crt: no such file or directory
E1207 21:16:16.121331    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/enable-default-cni-590458/client.crt: no such file or directory
E1207 21:16:20.483203    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/auto-590458/client.crt: no such file or directory
E1207 21:16:36.775553    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/flannel-590458/client.crt: no such file or directory

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (56.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-998622 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.1
E1207 20:59:26.394548    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/calico-590458/client.crt: no such file or directory
E1207 20:59:33.371099    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/custom-flannel-590458/client.crt: no such file or directory
E1207 20:59:53.075242    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/enable-default-cni-590458/client.crt: no such file or directory
E1207 20:59:53.080514    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/enable-default-cni-590458/client.crt: no such file or directory
E1207 20:59:53.090778    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/enable-default-cni-590458/client.crt: no such file or directory
E1207 20:59:53.111025    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/enable-default-cni-590458/client.crt: no such file or directory
E1207 20:59:53.151270    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/enable-default-cni-590458/client.crt: no such file or directory
E1207 20:59:53.231512    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/enable-default-cni-590458/client.crt: no such file or directory
E1207 20:59:53.392259    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/enable-default-cni-590458/client.crt: no such file or directory
E1207 20:59:53.712937    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/enable-default-cni-590458/client.crt: no such file or directory
E1207 20:59:54.353792    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/enable-default-cni-590458/client.crt: no such file or directory
E1207 20:59:55.634435    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/enable-default-cni-590458/client.crt: no such file or directory
E1207 20:59:58.195406    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/enable-default-cni-590458/client.crt: no such file or directory
E1207 21:00:03.316433    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/enable-default-cni-590458/client.crt: no such file or directory
E1207 21:00:13.557333    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/enable-default-cni-590458/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-998622 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.1: (56.368261774s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (56.37s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-999300 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1390dbea-3bd4-46d6-94cf-eb3ff8b6153f] Pending
helpers_test.go:344: "busybox" [1390dbea-3bd4-46d6-94cf-eb3ff8b6153f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1390dbea-3bd4-46d6-94cf-eb3ff8b6153f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.028852863s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-999300 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.56s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-998622 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1decdb4b-fe4d-4ecd-bb60-70e91403f5c6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1207 21:00:21.244335    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/functional-718233/client.crt: no such file or directory
helpers_test.go:344: "busybox" [1decdb4b-fe4d-4ecd-bb60-70e91403f5c6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.049117107s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-998622 exec busybox -- /bin/sh -c "ulimit -n"
E1207 21:00:29.214486    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/false-590458/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.08s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-999300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-999300 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (11.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-999300 --alsologtostderr -v=3
E1207 21:00:28.576821    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/false-590458/client.crt: no such file or directory
E1207 21:00:28.582097    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/false-590458/client.crt: no such file or directory
E1207 21:00:28.592372    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/false-590458/client.crt: no such file or directory
E1207 21:00:28.612650    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/false-590458/client.crt: no such file or directory
E1207 21:00:28.653019    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/false-590458/client.crt: no such file or directory
E1207 21:00:28.733345    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/false-590458/client.crt: no such file or directory
E1207 21:00:28.893801    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/false-590458/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-999300 --alsologtostderr -v=3: (11.074317389s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.07s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.10s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-998622 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1207 21:00:29.855502    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/false-590458/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-998622 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (11.10s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-998622 --alsologtostderr -v=3
E1207 21:00:31.136334    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/false-590458/client.crt: no such file or directory
E1207 21:00:33.696830    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/false-590458/client.crt: no such file or directory
E1207 21:00:34.038353    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/enable-default-cni-590458/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-998622 --alsologtostderr -v=3: (11.102692297s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.10s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-999300 -n old-k8s-version-999300
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-999300 -n old-k8s-version-999300: exit status 7 (91.783046ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-999300 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (445.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-999300 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E1207 21:00:38.817501    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/false-590458/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-999300 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (7m25.077423617s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-999300 -n old-k8s-version-999300
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (445.62s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-998622 -n no-preload-998622
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-998622 -n no-preload-998622: exit status 7 (137.915004ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-998622 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.35s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (323.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-998622 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.1
E1207 21:00:45.682892    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/skaffold-014348/client.crt: no such file or directory
E1207 21:00:48.315311    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/calico-590458/client.crt: no such file or directory
E1207 21:00:49.057850    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/false-590458/client.crt: no such file or directory
E1207 21:00:55.291303    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/custom-flannel-590458/client.crt: no such file or directory
E1207 21:00:56.922091    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kindnet-590458/client.crt: no such file or directory
E1207 21:01:09.538864    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/false-590458/client.crt: no such file or directory
E1207 21:01:14.998937    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/enable-default-cni-590458/client.crt: no such file or directory
E1207 21:01:20.483766    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/auto-590458/client.crt: no such file or directory
E1207 21:01:24.609139    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kindnet-590458/client.crt: no such file or directory
E1207 21:01:36.775424    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/flannel-590458/client.crt: no such file or directory
E1207 21:01:36.780721    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/flannel-590458/client.crt: no such file or directory
E1207 21:01:36.790945    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/flannel-590458/client.crt: no such file or directory
E1207 21:01:36.811168    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/flannel-590458/client.crt: no such file or directory
E1207 21:01:36.851440    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/flannel-590458/client.crt: no such file or directory
E1207 21:01:36.931783    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/flannel-590458/client.crt: no such file or directory
E1207 21:01:37.092167    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/flannel-590458/client.crt: no such file or directory
E1207 21:01:37.412744    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/flannel-590458/client.crt: no such file or directory
E1207 21:01:38.053200    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/flannel-590458/client.crt: no such file or directory
E1207 21:01:39.334153    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/flannel-590458/client.crt: no such file or directory
E1207 21:01:41.894808    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/flannel-590458/client.crt: no such file or directory
E1207 21:01:47.015867    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/flannel-590458/client.crt: no such file or directory
E1207 21:01:48.165975    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/auto-590458/client.crt: no such file or directory
E1207 21:01:50.499676    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/false-590458/client.crt: no such file or directory
E1207 21:01:57.256253    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/flannel-590458/client.crt: no such file or directory
E1207 21:02:04.817945    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/bridge-590458/client.crt: no such file or directory
E1207 21:02:04.823361    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/bridge-590458/client.crt: no such file or directory
E1207 21:02:04.833634    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/bridge-590458/client.crt: no such file or directory
E1207 21:02:04.853900    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/bridge-590458/client.crt: no such file or directory
E1207 21:02:04.894202    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/bridge-590458/client.crt: no such file or directory
E1207 21:02:04.974552    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/bridge-590458/client.crt: no such file or directory
E1207 21:02:05.134954    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/bridge-590458/client.crt: no such file or directory
E1207 21:02:05.455473    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/bridge-590458/client.crt: no such file or directory
E1207 21:02:06.096360    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/bridge-590458/client.crt: no such file or directory
E1207 21:02:07.377175    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/bridge-590458/client.crt: no such file or directory
E1207 21:02:09.937430    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/bridge-590458/client.crt: no such file or directory
E1207 21:02:15.058446    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/bridge-590458/client.crt: no such file or directory
E1207 21:02:17.737153    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/flannel-590458/client.crt: no such file or directory
E1207 21:02:25.299235    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/bridge-590458/client.crt: no such file or directory
E1207 21:02:36.919411    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/enable-default-cni-590458/client.crt: no such file or directory
E1207 21:02:45.779957    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/bridge-590458/client.crt: no such file or directory
E1207 21:02:58.697818    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/flannel-590458/client.crt: no such file or directory
E1207 21:03:04.471259    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/calico-590458/client.crt: no such file or directory
E1207 21:03:11.450031    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/custom-flannel-590458/client.crt: no such file or directory
E1207 21:03:12.420282    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/false-590458/client.crt: no such file or directory
E1207 21:03:26.740744    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/bridge-590458/client.crt: no such file or directory
E1207 21:03:32.156000    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/calico-590458/client.crt: no such file or directory
E1207 21:03:39.131852    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/custom-flannel-590458/client.crt: no such file or directory
E1207 21:03:48.619531    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubenet-590458/client.crt: no such file or directory
E1207 21:03:48.624840    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubenet-590458/client.crt: no such file or directory
E1207 21:03:48.635080    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubenet-590458/client.crt: no such file or directory
E1207 21:03:48.655419    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubenet-590458/client.crt: no such file or directory
E1207 21:03:48.695677    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubenet-590458/client.crt: no such file or directory
E1207 21:03:48.775933    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubenet-590458/client.crt: no such file or directory
E1207 21:03:48.936268    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubenet-590458/client.crt: no such file or directory
E1207 21:03:49.256787    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubenet-590458/client.crt: no such file or directory
E1207 21:03:49.897626    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubenet-590458/client.crt: no such file or directory
E1207 21:03:51.178828    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubenet-590458/client.crt: no such file or directory
E1207 21:03:53.739829    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubenet-590458/client.crt: no such file or directory
E1207 21:03:58.860264    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubenet-590458/client.crt: no such file or directory
E1207 21:04:04.781402    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt: no such file or directory
E1207 21:04:09.100545    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubenet-590458/client.crt: no such file or directory
E1207 21:04:16.130814    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
E1207 21:04:20.618565    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/flannel-590458/client.crt: no such file or directory
E1207 21:04:29.581353    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubenet-590458/client.crt: no such file or directory
E1207 21:04:48.661116    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/bridge-590458/client.crt: no such file or directory
E1207 21:04:53.075415    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/enable-default-cni-590458/client.crt: no such file or directory
E1207 21:05:04.290476    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/functional-718233/client.crt: no such file or directory
E1207 21:05:10.542515    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubenet-590458/client.crt: no such file or directory
E1207 21:05:20.760582    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/enable-default-cni-590458/client.crt: no such file or directory
E1207 21:05:21.244796    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/functional-718233/client.crt: no such file or directory
E1207 21:05:28.577060    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/false-590458/client.crt: no such file or directory
E1207 21:05:45.682723    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/skaffold-014348/client.crt: no such file or directory
E1207 21:05:56.260506    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/false-590458/client.crt: no such file or directory
E1207 21:05:56.922727    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kindnet-590458/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-998622 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.1: (5m22.56370033s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-998622 -n no-preload-998622
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (323.20s)
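
For local reproduction, the restart exercised above reduces to two commands, both copied from the log lines; the long runtime is largely because no preload tarball is used:

    # Restart the profile with preloads disabled, pinned to the release-candidate Kubernetes version.
    out/minikube-linux-arm64 start -p no-preload-998622 --memory=2200 --alsologtostderr \
      --wait=true --preload=false --driver=docker --container-runtime=docker \
      --kubernetes-version=v1.29.0-rc.1
    # Confirm the node host is back up after the restart.
    out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-998622 -n no-preload-998622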

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (15.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2nshw" [c6963a66-9987-459e-86cd-68030c9c4a89] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2nshw" [c6963a66-9987-459e-86cd-68030c9c4a89] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.030276107s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (15.03s)
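
The harness does its own pod polling here; an approximately equivalent manual check uses kubectl wait instead of the test's internal loop (the wait invocation below is an assumption, only the context, namespace, selector, and 9m budget come from the log):

    # Block until the restored dashboard pod reports Ready.
    kubectl --context no-preload-998622 -n kubernetes-dashboard wait pod \
      -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m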

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2nshw" [c6963a66-9987-459e-86cd-68030c9c4a89] Running
E1207 21:06:20.483281    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/auto-590458/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013358462s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-998622 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.16s)
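
The follow-up addon check is a single describe of the scraper deployment, verbatim from the log:

    # Verify the dashboard-metrics-scraper deployment still exists after the stop/start cycle.
    kubectl --context no-preload-998622 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard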

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-998622 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.37s)
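
The image audit is one CLI call; the harness parses the JSON and flags anything outside the expected Kubernetes/minikube image set (here the busybox test image):

    # Dump the images present in the profile as JSON for comparison against the expected list.
    out/minikube-linux-arm64 -p no-preload-998622 image list --format=json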

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-998622 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-998622 --alsologtostderr -v=1: (1.19880002s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-998622 -n no-preload-998622
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-998622 -n no-preload-998622: exit status 2 (577.223789ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-998622 -n no-preload-998622
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-998622 -n no-preload-998622: exit status 2 (539.226028ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-998622 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-998622 --alsologtostderr -v=1: (1.114726568s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-998622 -n no-preload-998622
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-998622 -n no-preload-998622
--- PASS: TestStartStop/group/no-preload/serial/Pause (5.09s)
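
The pause check alternates pause/unpause with status probes; while paused, status exits 2 with the API server reported as Paused and the kubelet as Stopped, and both probes return to exit 0 after unpause. Condensed from the log, with expected outputs noted as comments:

    out/minikube-linux-arm64 pause -p no-preload-998622 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-998622 -n no-preload-998622   # "Paused", exit 2
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-998622 -n no-preload-998622     # "Stopped", exit 2
    out/minikube-linux-arm64 unpause -p no-preload-998622 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-998622 -n no-preload-998622   # exit 0 once unpaused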

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (58.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-453080 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E1207 21:06:36.775698    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/flannel-590458/client.crt: no such file or directory
E1207 21:07:04.458753    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/flannel-590458/client.crt: no such file or directory
E1207 21:07:04.818057    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/bridge-590458/client.crt: no such file or directory
E1207 21:07:32.502237    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/bridge-590458/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-453080 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (58.348207139s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (58.35s)
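
The embed-certs variant differs from the other FirstStart runs only in the --embed-certs flag, which embeds the client certificates in the kubeconfig entry instead of referencing files under the .minikube directory; the full command from the log:

    out/minikube-linux-arm64 start -p embed-certs-453080 --memory=2200 --alsologtostderr \
      --wait=true --embed-certs --driver=docker --container-runtime=docker \
      --kubernetes-version=v1.28.4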

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.52s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-453080 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2ffa16da-2928-488f-b7e0-a640d4654773] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2ffa16da-2928-488f-b7e0-a640d4654773] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.0406678s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-453080 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.52s)
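
DeployApp applies the repo's testdata/busybox.yaml, waits for the pod, then reads the container's open-file limit. A manual equivalent, where the kubectl wait line is an assumption standing in for the harness's own polling and everything else is verbatim from the log:

    kubectl --context embed-certs-453080 create -f testdata/busybox.yaml
    # Wait on the same label the harness watches, within the same 8m budget.
    kubectl --context embed-certs-453080 wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m
    # Check the file-descriptor ulimit inside the container.
    kubectl --context embed-certs-453080 exec busybox -- /bin/sh -c "ulimit -n"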

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-453080 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-453080 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.176205899s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-453080 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.31s)
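
The addon is enabled with its image and registry overridden (values verbatim from the log; fake.domain appears to serve only as a placeholder proving the override is applied), then the resulting deployment is inspected:

    out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-453080 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    kubectl --context embed-certs-453080 describe deploy/metrics-server -n kube-system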

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (11.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-453080 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-453080 --alsologtostderr -v=3: (11.012115467s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-453080 -n embed-certs-453080
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-453080 -n embed-certs-453080: exit status 7 (84.578179ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-453080 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)
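
With the node stopped, status exits 7 and prints Stopped, which the harness tolerates before enabling the dashboard addon on the stopped profile; both commands are verbatim from the log:

    out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-453080 -n embed-certs-453080   # "Stopped", exit 7
    out/minikube-linux-arm64 addons enable dashboard -p embed-certs-453080 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4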

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (351.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-453080 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-453080 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (5m50.720006102s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-453080 -n embed-certs-453080
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (351.31s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-mkg9d" [6a828254-2bb5-42dc-9019-af534752ce83] Running
E1207 21:08:04.471069    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/calico-590458/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.025917328s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-mkg9d" [6a828254-2bb5-42dc-9019-af534752ce83] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010256328s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-999300 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-999300 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (3.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-999300 --alsologtostderr -v=1
E1207 21:08:11.458443    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/custom-flannel-590458/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-999300 -n old-k8s-version-999300
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-999300 -n old-k8s-version-999300: exit status 2 (363.701352ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-999300 -n old-k8s-version-999300
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-999300 -n old-k8s-version-999300: exit status 2 (340.297942ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-999300 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-999300 -n old-k8s-version-999300
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-999300 -n old-k8s-version-999300
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.54s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (52.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-797160 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.1
E1207 21:08:47.826658    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt: no such file or directory
E1207 21:08:48.619903    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubenet-590458/client.crt: no such file or directory
E1207 21:09:04.781067    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-797160 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.1: (52.317464393s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (52.32s)
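
The newest-cni start narrows the readiness wait to a component list, turns on a feature gate, selects the cni network plugin, and forwards a pod CIDR to kubeadm through --extra-config; the flags below are copied from the log:

    out/minikube-linux-arm64 start -p newest-cni-797160 --memory=2200 --alsologtostderr \
      --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true \
      --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=docker --kubernetes-version=v1.29.0-rc.1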

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-797160 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-797160 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.220856756s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-797160 --alsologtostderr -v=3
E1207 21:09:16.130444    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/addons-946218/client.crt: no such file or directory
E1207 21:09:16.304566    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubenet-590458/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-797160 --alsologtostderr -v=3: (10.999511754s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-797160 -n newest-cni-797160
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-797160 -n newest-cni-797160: exit status 7 (90.136216ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-797160 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (33.91s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-797160 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.1
E1207 21:09:53.074529    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/enable-default-cni-590458/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-797160 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.1: (33.48312637s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-797160 -n newest-cni-797160
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (33.91s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-797160 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.6s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-797160 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-797160 -n newest-cni-797160
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-797160 -n newest-cni-797160: exit status 2 (370.667113ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-797160 -n newest-cni-797160
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-797160 -n newest-cni-797160: exit status 2 (390.399951ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-797160 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-797160 -n newest-cni-797160
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-797160 -n newest-cni-797160
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.60s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-167508 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E1207 21:10:14.707509    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/old-k8s-version-999300/client.crt: no such file or directory
E1207 21:10:14.712790    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/old-k8s-version-999300/client.crt: no such file or directory
E1207 21:10:14.723016    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/old-k8s-version-999300/client.crt: no such file or directory
E1207 21:10:14.743309    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/old-k8s-version-999300/client.crt: no such file or directory
E1207 21:10:14.784073    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/old-k8s-version-999300/client.crt: no such file or directory
E1207 21:10:14.864471    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/old-k8s-version-999300/client.crt: no such file or directory
E1207 21:10:15.025130    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/old-k8s-version-999300/client.crt: no such file or directory
E1207 21:10:15.345707    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/old-k8s-version-999300/client.crt: no such file or directory
E1207 21:10:15.986455    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/old-k8s-version-999300/client.crt: no such file or directory
E1207 21:10:17.267432    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/old-k8s-version-999300/client.crt: no such file or directory
E1207 21:10:19.827668    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/old-k8s-version-999300/client.crt: no such file or directory
E1207 21:10:21.069553    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/no-preload-998622/client.crt: no such file or directory
E1207 21:10:21.075291    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/no-preload-998622/client.crt: no such file or directory
E1207 21:10:21.085543    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/no-preload-998622/client.crt: no such file or directory
E1207 21:10:21.105807    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/no-preload-998622/client.crt: no such file or directory
E1207 21:10:21.147875    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/no-preload-998622/client.crt: no such file or directory
E1207 21:10:21.228156    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/no-preload-998622/client.crt: no such file or directory
E1207 21:10:21.244887    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/functional-718233/client.crt: no such file or directory
E1207 21:10:21.389118    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/no-preload-998622/client.crt: no such file or directory
E1207 21:10:21.709729    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/no-preload-998622/client.crt: no such file or directory
E1207 21:10:22.350884    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/no-preload-998622/client.crt: no such file or directory
E1207 21:10:23.631502    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/no-preload-998622/client.crt: no such file or directory
E1207 21:10:24.948484    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/old-k8s-version-999300/client.crt: no such file or directory
E1207 21:10:26.192099    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/no-preload-998622/client.crt: no such file or directory
E1207 21:10:28.576926    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/false-590458/client.crt: no such file or directory
E1207 21:10:31.313150    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/no-preload-998622/client.crt: no such file or directory
E1207 21:10:35.188674    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/old-k8s-version-999300/client.crt: no such file or directory
E1207 21:10:41.554063    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/no-preload-998622/client.crt: no such file or directory
E1207 21:10:45.682295    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/skaffold-014348/client.crt: no such file or directory
E1207 21:10:55.669351    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/old-k8s-version-999300/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-167508 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (52.925759177s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.93s)
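
This variant is a standard FirstStart except that the API server is exposed on port 8444 rather than minikube's default 8443; the command from the log:

    out/minikube-linux-arm64 start -p default-k8s-diff-port-167508 --memory=2200 --alsologtostderr \
      --wait=true --apiserver-port=8444 --driver=docker --container-runtime=docker \
      --kubernetes-version=v1.28.4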

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-167508 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [854c14e2-6e1d-4646-94ca-e9313ae54f41] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1207 21:10:56.923074    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kindnet-590458/client.crt: no such file or directory
helpers_test.go:344: "busybox" [854c14e2-6e1d-4646-94ca-e9313ae54f41] Running
E1207 21:11:02.034621    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/no-preload-998622/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.038324082s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-167508 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.51s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-167508 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-167508 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.071009002s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-167508 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.19s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-167508 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-167508 --alsologtostderr -v=3: (11.03273348s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.03s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-167508 -n default-k8s-diff-port-167508
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-167508 -n default-k8s-diff-port-167508: exit status 7 (101.287788ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-167508 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (321.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-167508 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E1207 21:11:20.483045    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/auto-590458/client.crt: no such file or directory
E1207 21:11:36.629580    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/old-k8s-version-999300/client.crt: no such file or directory
E1207 21:11:36.775889    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/flannel-590458/client.crt: no such file or directory
E1207 21:11:42.995167    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/no-preload-998622/client.crt: no such file or directory
E1207 21:12:04.817634    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/bridge-590458/client.crt: no such file or directory
E1207 21:12:19.970239    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kindnet-590458/client.crt: no such file or directory
E1207 21:12:43.526922    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/auto-590458/client.crt: no such file or directory
E1207 21:12:58.550073    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/old-k8s-version-999300/client.crt: no such file or directory
E1207 21:13:04.471387    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/calico-590458/client.crt: no such file or directory
E1207 21:13:04.916226    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/no-preload-998622/client.crt: no such file or directory
E1207 21:13:11.450010    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/custom-flannel-590458/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-167508 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (5m20.692819657s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-167508 -n default-k8s-diff-port-167508
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (321.24s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (15.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-j7hnv" [48321bcf-b426-4d55-a5bd-a18c99ec8fe6] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1207 21:13:48.619681    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/kubenet-590458/client.crt: no such file or directory
E1207 21:13:48.727873    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/skaffold-014348/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-j7hnv" [48321bcf-b426-4d55-a5bd-a18c99ec8fe6] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.092566961s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (15.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-j7hnv" [48321bcf-b426-4d55-a5bd-a18c99ec8fe6] Running
E1207 21:14:04.780932    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/ingress-addon-legacy-362953/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.030965012s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-453080 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.15s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-453080 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-453080 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-453080 -n embed-certs-453080
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-453080 -n embed-certs-453080: exit status 2 (375.420796ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-453080 -n embed-certs-453080
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-453080 -n embed-certs-453080: exit status 2 (363.305414ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-453080 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-453080 -n embed-certs-453080
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-453080 -n embed-certs-453080
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vcxpm" [007a9744-a1ea-450f-8b63-1decf75bab84] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vcxpm" [007a9744-a1ea-450f-8b63-1decf75bab84] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.024459468s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.03s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vcxpm" [007a9744-a1ea-450f-8b63-1decf75bab84] Running
E1207 21:16:51.621511    7600 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/false-590458/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010375621s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-167508 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-167508 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-167508 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-167508 -n default-k8s-diff-port-167508
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-167508 -n default-k8s-diff-port-167508: exit status 2 (341.575607ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-167508 -n default-k8s-diff-port-167508
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-167508 -n default-k8s-diff-port-167508: exit status 2 (359.449166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-167508 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-167508 -n default-k8s-diff-port-167508
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-167508 -n default-k8s-diff-port-167508
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.03s)
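
The pause verification above follows a short command sequence that can be repeated by hand. A minimal sketch, assuming an existing minikube profile (the profile name below is illustrative, not from this run):

	# Pause the cluster, then check what the status fields report.
	minikube pause -p my-profile
	minikube status --format='{{.APIServer}}' -p my-profile   # prints "Paused"; exit status 2, which the test accepts
	minikube status --format='{{.Kubelet}}' -p my-profile     # prints "Stopped"; also exit status 2
	# Unpause and re-run the same status queries; both should report "Running" again.
	minikube unpause -p my-profile
	minikube status --format='{{.APIServer}}' -p my-profile
	minikube status --format='{{.Kubelet}}' -p my-profile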

                                                
                                    

Test skip (27/330)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.1/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.84s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-220646 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:237: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-220646" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-220646
--- SKIP: TestDownloadOnlyKic (0.84s)
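
The command this test exercises is a plain download-only start; a minimal sketch using the same flags that appear in the log (the profile name is illustrative):

	# Download the images and binaries for a cluster without starting it, then remove the profile.
	minikube start --download-only -p download-docker-test --driver=docker --container-runtime=docker
	minikube delete -p download-docker-test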

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:443: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-590458 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-590458

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-590458

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-590458

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-590458

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-590458

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-590458

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-590458

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-590458

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-590458

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-590458

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-590458

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-590458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-590458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-590458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-590458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-590458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-590458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-590458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-590458" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-590458

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-590458

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-590458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-590458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-590458

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-590458

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-590458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-590458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-590458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-590458" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-590458" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17719-2292/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 07 Dec 2023 20:37:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: offline-docker-242133
contexts:
- context:
    cluster: offline-docker-242133
    extensions:
    - extension:
        last-update: Thu, 07 Dec 2023 20:37:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: offline-docker-242133
  name: offline-docker-242133
current-context: offline-docker-242133
kind: Config
preferences: {}
users:
- name: offline-docker-242133
  user:
    client-certificate: /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/offline-docker-242133/client.crt
    client-key: /home/jenkins/minikube-integration/17719-2292/.minikube/profiles/offline-docker-242133/client.key
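
The repeated "context was not found" / "does not exist" errors throughout this debug dump line up with the kubeconfig above: the only entry left on the host at capture time is offline-docker-242133, and no cilium-590458 context was ever created. A minimal sketch of how that can be confirmed with kubectl (profile names taken from this log):

	kubectl config get-contexts                    # lists only offline-docker-242133 here
	kubectl config current-context                 # offline-docker-242133
	kubectl --context cilium-590458 get pods -A    # fails: context "cilium-590458" does not exist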

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-590458

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-590458" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-590458"

                                                
                                                
----------------------- debugLogs end: cilium-590458 [took: 4.565935347s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-590458" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-590458
--- SKIP: TestNetworkPlugins/group/cilium (4.77s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-496099" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-496099
--- SKIP: TestStartStop/group/disable-driver-mounts (0.28s)

                                                
                                    